Health Evidence Knowledge Accelerator (HEvKA) Daily Progress December 22, 2023


Joanne Dehnbostel

Dec 23, 2023, 9:15:25 AM
to Health Evidence Knowledge Accelerator (HEvKA)

15 people (BA, BSP, CA-D, CE, HK, HL, JD, JT, KS, KW, LW, MA, PW, SM, XY) participated today in up to 3 active working group meetings.

 

*All of our HEvKA meetings starting January 2, 2024 will use the new link below; the meeting schedule will remain the same:

 

To join any of these meetings:

Microsoft Teams meeting

Join on your computer, mobile app or room device

Click here to join the meeting *New Link

Meeting ID: 279 232 517 719
Passcode: 8pCpbF

Download Teams | Join on the web

Or call in (audio only)

+1 929-346-7156,,35579956#   United States, New York City

Phone Conference ID: 355 799 56#

Find a local number

Meeting support by ComputablePublishing.com

 

 

 

 

Today the Risk of Bias Working Group found 2 terms approved (unsubstantiated interpretation of results, qualitative research bias). Three terms require more votes for approval, and one additional term was defined today (incoherence among data, analysis, and interpretation), so there are currently 4 risk of bias terms open for vote.

 

Term: Incoherence among data, analysis, and interpretation
Definition: There are one or more mismatches among hypothesis, data collected, data analysis, and results interpretation in the study report.
Comment for application: The term mismatch applies to an inappropriate or wrong or inadequate relationship.

Term: bias in qualitative research design
Definition: A bias specific to the design of qualitative research.
Comment for application: The qualitative approach used in a study should be appropriate for the research question and problem. Common qualitative research approaches include (this list is not exhaustive):
  • Ethnography - The aim of the study is to describe and interpret the shared cultural behavior of a group of individuals.
  • Phenomenology - The study focuses on the subjective experiences and interpretations of a phenomenon encountered by individuals.
  • Narrative research - The study analyzes life experiences of an individual or a group.
  • Grounded theory - Generation of theory from data in the process of conducting research (data collection occurs first).
  • Case study - In-depth exploration and/or explanation of issues intrinsic to a particular case. A case can be anything from a decision-making process, to a person, an organization, or a country.
  • Qualitative description - There is no specific methodology, but a qualitative data collection and analysis, e.g., in-depth interviews or focus groups, and hybrid thematic analysis (inductive and deductive).
Reference - Hong QN, Pluye P, Fàbregues S, Bartlett G, Boardman F, Cargo M, Dagenais P, Gagnon M-P, Griffiths F, Nicolau B, O’Cathain A, Rousseau M-C, Vedel I. Mixed Methods Appraisal Tool (MMAT), version 2018. Registration of Copyright (#1148552), Canadian Intellectual Property Office, Industry Canada. Accessed at: http://mixedmethodsappraisaltoolpublic.pbworks.com/w/file/fetch/127916259/MMAT_2018_criteria-manual_2018-08-01_ENG.pdf

Term: bias in qualitative data collection methods
Definition: A bias specific to the conduct of qualitative research.
Comment for application: The data sources (e.g., archives, documents), the method of data collection (e.g., in-depth interviews, group interviews, and/or observations), and the form of the data (e.g., tape recording, video material, diary, photo, and/or field notes) should be adequate and appropriate to address the research question. The term 'bias in qualitative data collection methods' may be supplemental to other terms for types of detection bias or types of selection bias.

Term: bias in qualitative analysis
Definition: A bias specific to the analysis of qualitative research.
Comment for application: The analysis approach should be appropriate for the research question and qualitative approach (design). The term 'bias in qualitative analysis' may be supplemental to other terms for types of analysis bias. When interpretation is an integral part of qualitative analysis, bias in the interpretive analysis should use the term 'bias in qualitative analysis' rather than 'cognitive interpretive bias in reporting'.

To participate you can join the Scientific Evidence Code System (SEVCO) Expert Working Group at https://fevir.net/resources/Project/27845.

 

Today the GRADE Ontology Working Group reviewed 11 votes (9-2) and 1 comment for the term Risk of bias. We created 2 additional terms to distinguish Risk of bias across studies from Risk of bias within a study, and we made changes to the comment for application regarding the use of these terms.

 

There are 5 terms open for voting for continued discussion.

 

Please visit the term pages (Risk of bias, Risk of bias across studies, Risk of bias within a study, Inconsistency, Indirectness) and click the Comment button if you would like to share any comments that will be openly viewed by anyone visiting the page.  You may also click the Vote button to anonymously register your agreement or disagreement with this term.  If you vote ‘No’ you need to add a comment (along with your vote, not publicly shared with your name) explaining what change is needed to reach agreement.

 

 

Preferred Term: Risk of bias
Definition: The potential for systematic error in the results or findings of a study or across studies.
Comment for application: Related terms used by others for "risk of bias" include "internal validity problems", "study limitations", and "methodological limitations". In the definition of risk of bias, "potential for" covers both the likelihood and the magnitude of the error. A systematic error is a difference between the reported results (findings, conclusions, or effect estimates) and the actuality (the truth, the estimand, or the true value targeted for estimation). The systematic error may occur at any stage in the conception and design of a study or in the collection, analysis, interpretation, or reporting of data. Risk of bias is one of the domains that can impact the rating of the certainty of evidence. In GRADE, the term 'risk of bias' (most frequently used for 'Risk of bias across studies') is applied to the body of evidence for a single outcome. Best practice may include precisely using the code for 'Risk of bias across studies' in computer applications but the shorter phrase 'Risk of bias' when preferred for readability.

Preferred Term: Risk of bias across studies
Definition: The potential for systematic error in results or findings across studies.
Comment for application: Related terms used by others for "risk of bias" include "internal validity problems", "study limitations", and "methodological limitations". In the definition of risk of bias, "potential for" covers both the likelihood and the magnitude of the error. A systematic error is a difference between the reported results (findings, conclusions, or effect estimates) and the actuality (the truth, the estimand, or the true value targeted for estimation). The systematic error may occur at any stage in the conception and design of a study or in the collection, analysis, interpretation, or reporting of data. Risk of bias is one of the domains that can impact the rating of the certainty of evidence. In GRADE, the term 'risk of bias' (most frequently used for 'Risk of bias across studies') is applied to the body of evidence for a single outcome. Best practice may include precisely using the code for 'Risk of bias across studies' in computer applications but the shorter phrase 'Risk of bias' when preferred for readability.

Preferred Term: Risk of bias within a study
Definition: The potential for systematic error in the results or findings from a single study.
Comment for application: Related terms used by others for "risk of bias" include "internal validity problems", "study limitations", and "methodological limitations". In the definition of risk of bias, "potential for" covers both the likelihood and the magnitude of the error. A systematic error is a difference between the reported results (findings, conclusions, or effect estimates) and the actuality (the truth, the estimand, or the true value targeted for estimation). The systematic error may occur at any stage in the conception and design of a study or in the collection, analysis, interpretation, or reporting of data.
The GRADE approach primarily uses the term 'risk of bias' for the concept of 'risk of bias across studies' as one of the domains that can impact the rating of the certainty of evidence. When study-specific risk of bias assessment is reported, best practice may include precisely using the code for 'Risk of bias within a study' in computer applications, but the shorter phrase 'Risk of bias' may be used when preferred for readability if the context makes clear that it is being applied to studies individually.

Preferred Term: Inconsistency
Definition: Variations in the findings across studies or analyses that were considered to estimate the effect.
Comment for application: Variations may include differences across estimates from different studies or differences across estimates from different analyses (such as sensitivity analyses) within studies. Variations may include differences in the magnitude or direction of the findings. Criteria for evaluating inconsistency include (but are not limited to) similarity of point estimates, extent of overlap of confidence intervals, and statistical criteria including tests of heterogeneity, I² or Tau². Inconsistency is one of the domains that can impact the rating of the certainty of evidence. Related terms used by others for "inconsistency" include "heterogeneity", "statistical heterogeneity", and "clinical heterogeneity".

Preferred Term: Indirectness
Definition: Mismatch between the populations, the exposures or interventions, the comparators, or the outcomes measured in the studies or analyses that were considered to estimate the effect and those under consideration in a question of interest.
Comment for application: The question of interest may vary with context, such as the key considerations for a guideline or systematic review. Indirectness is one of the domains that can impact the rating of the certainty of evidence. Related terms used by others for "indirectness" include "lacking direct relevance" and "external validity problems".
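The I² statistic listed among the inconsistency criteria summarizes the proportion of variability across study estimates that is beyond chance. As a minimal illustration (not part of the working group's materials, and using hypothetical study estimates), it can be computed from Cochran's Q with inverse-variance weights:

```python
def cochran_q_and_i2(estimates, std_errors):
    """Cochran's Q and the I^2 heterogeneity statistic for a set of
    study effect estimates, using fixed-effect inverse-variance weights."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * y for w, y in zip(weights, estimates)) / sum(weights)
    q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, estimates))
    df = len(estimates) - 1
    # I^2 = (Q - df) / Q, floored at 0 and expressed as a percentage
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Hypothetical log-odds-ratio estimates from four studies
q, i2 = cochran_q_and_i2([0.2, 0.5, 0.1, 0.9], [0.1, 0.2, 0.15, 0.3])
```

Tau² (the between-study variance) would additionally require a random-effects estimator such as DerSimonian-Laird.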


 

The Research on FHIR Working Group worked on necessary adjustments to the FEvIR Platform.

 

Releases today on the FEvIR Platform:

The Fast Evidence Interoperability Resources (FEvIR) Platform is available for use now, but is “pre-release”.  The current version is 0.192.0 (December 22, 2023). Viewing resources is open without login.  Signing in is free and required to create content (which can then only be edited by the person who created the content).

    • Release 0.192.0 (December 22, 2023) increases efficiency of numerous functions across the FEvIR Platform, such as adding new Resources when using the MEDLINE-to-FEvIR Converter for multiple entries. 

 

 

Quote for thought: “Tonight’s December thirty-first, something is about to burst … Hark, it’s midnight, children dear. Duck! Here comes another year!” —Ogden Nash

 

To get involved or stay informed with the Health Evidence Knowledge Accelerator (HEvKA): HEvKA Project Page on FEvIR Platform, HEvKA Project Page on HL7 Confluence, or join any of the groups that are now meeting in the following weekly schedule:

 

Weekly Meeting Schedule

Day

Time (Eastern)

Team

 

Monday 

8-9 am 

Project Management

 

Monday

9-10 am

Setting the Scientific Record on FHIR WG

 

Monday 

10-11 am

CQL Development WG (a CDS EBMonFHIR sub-WG)

 

Monday 

2-3 pm 

Statistic Terminology WG

 

Tuesday

9-10 am

Measuring the Rate of Scientific Knowledge Transfer WG

 

Tuesday 

2-3 pm 

StatisticsOnFHIR WG (a CDS EBMonFHIR sub-WG)

 

Tuesday 

3-4 pm 

Ontology Management WG

 

Wednesday

8-9 am 

Funding the Ecosystem Infrastructure WG

 

Wednesday

9-10 am

Communications (Awareness, Scholarly Publications) WG

 

Thursday

8-9 am

EBM Implementation Guide WG (a CDS EBMonFHIR sub-WG)

 

Thursday

9-10 am

Computable EBM Tools Development WG

 

Thursday 

4-5 pm

Project Management

 

Friday

9-10 am 

Risk of Bias Terminology WG

 

Friday

10-11 am 

GRADE Ontology WG

 

Friday

12-1 pm

Research on FHIR Working Group

 

 

 

 

 

 


 

 

HAPPY HOLIDAYS!!!

Joanne Dehnbostel MS, MPH

Research and Analysis Manager, Computable Publishing LLC

 


 

Making Science Machine-Interpretable
http://computablepublishing.com 

 

The views and opinions included in this email belong to their author and do not necessarily mirror the views and opinions of the company. Our employees are obliged not to make any defamatory clauses, infringe, or authorize infringement of any legal right. Therefore, the company will not take any liability for such statements included in emails. In case of any damages or other liabilities arising, employees are fully responsible for the content of their emails.

Joanne Dehnbostel

Jan 3, 2024, 5:56:40 PM
to Health Evidence Knowledge Accelerator (HEvKA)

6 people (BA, CE, JD, JO, JT, KS) participated today in up to 2 active working group meetings.

Today, the Funding the Ecosystem Infrastructure Working Group discussed potential synergies between the HEvKA efforts and the various Learning Health System Collaborative-related, target-focused efforts (e.g., sickle cell disease at NASCC, chronic kidney disease at the VA). Areas of interest may include supporting the digital expression of clinical practice guidelines, research study protocols, research study implementation, and the inclusion and exclusion criteria for groups such as the specific targets for clinical decision support interventions.

The Communications Working Group discussed an upcoming 20-minute presentation regarding the Evidence Based Medicine Implementation Guide which will be given at the January 2024 HL7 Working Group Meeting. To improve efficiency, the group also discussed the general workflow for collecting, disseminating, and archiving information about the HEvKA working groups via the Daily and Weekly Updates and multiple websites. 

The HL7 Clinical Decision Support (CDS) Working Group discussed the January ballot for the Evidence Based Medicine Implementation Guide, including how to vote and how to link comments to a vote. Much of the information discussed can be found here: https://confluence.hl7.org/pages/viewpage.action?pageId=19136734#JiraBallotProcess-BallotSubmission.

Today's Releases on the FEvIR Platform:

  • The Fast Evidence Interoperability Resources (FEvIR) Platform is available for use now, but is “pre-release”.  The current version is 0.200.0 (January 3, 2024). Viewing resources is open without login.  Signing in is free and required to create content (which can then only be edited by the person who created the content).
    • Release 0.200.0 (January 3, 2024) limits the View Resources List page display and the FEvIR Search Limit by Resource Type list to valid FHIR Resource types or “Project Resources”, and improves the data entry experience for radio buttons by adding line wrapping when the options exceed the available horizontal space.
  • Computable Publishing®: Comparative Evidence Report Authoring Tool version 0.10.0 (January 3, 2024) creates and displays a Composition Resource with a ComparativeEvidenceReport Profile.
    • Release 0.10.0 (January 3, 2024) creates referenced Resources if changes are saved without pre-existing resource references.

 

Quote for Thought: "There are no limits. There are only plateaus, and you must not stay there; you must go beyond them." –Bruce Lee

 

To get involved or stay informed: HEvKA Project Page on FEvIR Platform, HEvKA Project Page on HL7 Confluence, or join any of the groups that are now meeting in the following weekly schedule:

Weekly Meeting Schedule and Link:

 

Wednesday

9-10 am 

Communications (Awareness, Scholarly Publications) WG

Thursday

8-9 am

EBM Implementation Guide WG (a CDS EBMonFHIR sub-WG)

Thursday

9-10 am

Computable EBM Tools Development WG

Thursday 

4-5 pm 

Project Management

Friday

9-10 am 

Risk of Bias Terminology WG

Friday

10-11 am 

GRADE Ontology WG

Friday

12-1 pm

ResearchOnFHIR WG

 

To join any of these meetings:

________________________________________________________________________________

Microsoft Teams meeting

Join on your computer, mobile app or room device

Meeting ID: 279 232 517 719
Passcode: 8pCpbF

Download Teams | Join on the web

Or call in (audio only)

+1 929-346-7156,,35579956#   United States, New York City

Phone Conference ID: 355 799 56#

Find a local number

Meeting support by ComputablePublishing.com

________________________________________________________________________________

 

 

 

 

 

Joanne Dehnbostel MS, MPH

Research and Analysis Manager, Computable Publishing LLC

 


 

Making Science Machine-Interpretable
http://computablepublishing.com 

 

Joanne Dehnbostel

Jan 4, 2024, 10:10:33 PM
to Health Evidence Knowledge Accelerator (HEvKA)

 

 

5 people (BA, CE, IK, JD, KS) participated today in up to 3 active working group meetings.

Today the EBM Implementation Guide Working Group (HL7 CDS EBMonFHIR sub-WG) worked to prepare for a future effort to include the EBMonFHIR IG CodeSystem in the HL7 terminology. We first need to establish how to manage changes to HL7 terminologies, and we are starting with the following Jira item: https://jira.hl7.org/browse/UP-427. The group also set up a project page to follow the progress of this terminology coordination: https://fevir.net/resources/Project/183616.

The Computable EBM Tools Development Working Group reviewed aliases, value sets, and our approach to PICO based searching for Compositions on the FEvIR platform and made specific plans for what to test during the Connectathon. This work will continue over the next two weeks and will be tested at the HL7 Connectathon (Jan 16-18).

The HL7 BRR Work Group continued describing how to use the FHIR Group Resource to express eligibility criteria. The page can be viewed here: Expressing Eligibility Criteria using the Group (R6) Resource - Reference Page. The group also created two Jira tickets to correct issues with the R6 version of the Group Resource: https://jira.hl7.org/browse/FHIR-43499 and https://jira.hl7.org/browse/FHIR-43501.
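As a rough sketch of the approach being described, eligibility criteria can be expressed as characteristics on a FHIR Group resource. The field names below follow the FHIR Group resource; the specific criteria and codes are hypothetical placeholders, not content from the working group:

```python
# Sketch: eligibility criteria expressed as characteristics on a FHIR Group.
# "definitional" membership means the group is defined by criteria rather
# than an enumerated roster; exclude=True marks an exclusion criterion.
eligibility_group = {
    "resourceType": "Group",
    "type": "person",
    "membership": "definitional",
    "characteristic": [
        {
            # Hypothetical inclusion criterion: adults (age 18 years or older)
            "code": {"text": "age"},
            "valueRange": {"low": {"value": 18, "unit": "years"}},
            "exclude": False,
        },
        {
            # Hypothetical exclusion criterion: currently pregnant
            "code": {"text": "pregnancy status"},
            "valueBoolean": True,
            "exclude": True,
        },
    ],
}
```

A consuming application would evaluate each characteristic against patient data, applying the `exclude` flag to distinguish inclusion from exclusion criteria.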

The Project Management Working Group moved the Project Management meeting from Thursday afternoon to Friday morning at 8 am Eastern time and prepared the HEvKA weekly agenda for the next 8 days:

Suggested agenda for the week:

Friday, January 5:

      • 8 am Eastern: Project Management – Prepare for today’s meetings
      • 9 am Eastern: Risk of Bias Terminology Working Group – review SEVCO terms (biases in qualitative studies)
      • 10 am Eastern: GRADE Ontology Working Group - term development, introduction for new participants, review results of vote on risk of bias terms, inconsistency and indirectness.
      • 12 pm Eastern: ResearchOnFHIR Working Group - review objectives and priorities (will this meeting continue in 2024?)

Monday, January 8:

      • 8 am Eastern: Project Management - Connectathon preparation, HL7 terminology coordination
      • 9 am Eastern: Setting the Scientific Record on FHIR Working Group -  mapping GRADEpro to FHIR 
      • 10 am Eastern: CQL Development Working Group (HL7 CDS EBMonFHIR sub-WG) - create project page and list of examples 
      • 2 pm Eastern: Statistic Terminology Working Group – review measures of calibration and dispersion
      • 3 pm Eastern: HL7 Learning Health Systems (LHS) WG – Connectathon preparation/coordination

Tuesday, January 9:

      • 9 am Eastern: Measuring the Rate of Scientific Knowledge Transfer Working Group – review changes for overall project support
      • 2 pm Eastern: StatisticsOnFHIR Working Group (HL7 CDS EBMonFHIR sub-WG) – continue to work on ResearchStudy example
      • 3 pm Eastern: Ontology Management Working Group – review objectives and priorities, discuss coordination with HL7 terminology
      • 4 pm Eastern: HL7 BRR agenda may include ResearchStudy Builder/Viewer use

Wednesday, January 10

      • 8 am Eastern: Funding the Knowledge Ecosystem Infrastructure Working Group – work on "big picture" portion of the EBM-IG presentation for January HL7 Working Group Meeting 
      • 9 am Eastern: Communications Working Group – continue work on the EBM IG presentation for the HL7 Working Group Meeting and other publications, presentations, and website developments

Thursday, January 11:

      • 8 am Eastern: EBM Implementation Guide Working Group (HL7 CDS EBMonFHIR sub-WG) – continue work on preparing changes for HL7 terminology
      • 9 am Eastern: Computable EBM Tools Development Working Group – review Aliases, Value Sets, and approach to searching for Compositions (PICO search support)
      • 12 pm Eastern: HL7 BRR Work Group - review Eligibility Criteria in Group Resource

Friday, January 12:

      • 8 am Eastern: Project Management (*new time!) – prepare weekly agenda
      • 9 am Eastern: Risk of Bias Terminology Working Group – review SEVCO terms
      • 10 am Eastern: GRADE Ontology Working Group – term development, introduction for new participants
      • 12 pm Eastern: ResearchOnFHIR Working Group - review objectives and priorities

Releases Today on the FEvIR Platform:

  • The Fast Evidence Interoperability Resources (FEvIR) Platform is available for use now, but is “pre-release”.  The current version is 0.201.0 (January 4, 2024). Viewing resources is open without login.  Signing in is free and required to create content (which can then only be edited by the person who created the content).
    • Release 0.201.0 (January 4, 2024) adds a PICO Search interface to the FEvIR Search page.
  • The Computable Publishing®: Comparative Evidence Report Authoring Tool version 0.11.0 (January 4, 2024) creates and displays a Composition Resource with a ComparativeEvidenceReport Profile.
    • Release 0.11.0 (January 4, 2024) disables the "Save Changes to … Resource" buttons when no changes are present and emphasizes the buttons with green color when changes are made.

Quote for Thought: "Most people spend more energy going around problems than in trying to solve them"–Henry Ford


Joanne Dehnbostel

Jan 6, 2024, 1:40:17 AM
to Health Evidence Knowledge Accelerator (HEvKA)

 

 

19 people (AI, BA, BS, CA-D, CM, HL, IK, IM, JB, JD, JT, KP, KS, KW, LW, PW, SL, SS, TD) participated today in up to 4 active working group meetings.

Today the Project Management Working Group prepared for today’s meetings.

The Risk of Bias Working Group found 3 terms with negative votes, which were discussed, reworked, and resubmitted for vote (bias in qualitative research design, bias in qualitative data collection methods, bias in qualitative analysis); one term that requires more votes for approval (incoherence among data, analysis, and interpretation); and one additional term that was defined today (mixed methods research bias), so there are currently 5 risk of bias terms open for vote.

 

Term: bias in qualitative research design
Definition: A qualitative research bias in which the qualitative approach used in a study is not appropriate for the research question and problem.
Comment for application: Bias is defined as a systematic distortion in research results (estimation of effect, association, or inference). Distortions in research results mean differences between the reported results (findings, conclusions, effect estimates) and the actuality (the truth, the estimand [the quantity targeted for estimation]). In qualitative research, the actuality may include multiple meanings that individuals or groups assign to concepts, and there is no quantitative estimand. The qualitative approach used in a study should be appropriate for the research question and problem. Common qualitative research approaches include (this list is not exhaustive):
  • Ethnography - The aim of the study is to describe and interpret the shared cultural behavior of a group of individuals.
  • Phenomenology - The study focuses on the subjective experiences and interpretations of a phenomenon encountered by individuals.
  • Narrative research - The study analyzes life experiences of an individual or a group.
  • Grounded theory - Generation of theory from data in the process of conducting research (data collection occurs first).
  • Case study - In-depth exploration and/or explanation of issues intrinsic to a particular case. A case can be anything from a decision-making process, to a person, an organization, or a country.
  • Qualitative description - There is no specific methodology, but a qualitative data collection and analysis, e.g., in-depth interviews or focus groups, and hybrid thematic analysis (inductive and deductive).
Reference - Hong QN, Pluye P, Fàbregues S, Bartlett G, Boardman F, Cargo M, Dagenais P, Gagnon M-P, Griffiths F, Nicolau B, O’Cathain A, Rousseau M-C, Vedel I. Mixed Methods Appraisal Tool (MMAT), version 2018. Registration of Copyright (#1148552), Canadian Intellectual Property Office, Industry Canada. Accessed at: http://mixedmethodsappraisaltoolpublic.pbworks.com/w/file/fetch/127916259/MMAT_2018_criteria-manual_2018-08-01_ENG.pdf

Term: bias in qualitative data collection methods
Definition: A qualitative research bias in which the data sources, the methods of data collection, and the forms of data are not adequate or appropriate to address the research question.
Comment for application: Bias is defined as a systematic distortion in research results (estimation of effect, association, or inference). Distortions in research results mean differences between the reported results (findings, conclusions, effect estimates) and the actuality (the truth, the estimand [the quantity targeted for estimation]). In qualitative research, the actuality may include multiple meanings that individuals or groups assign to concepts, and there is no quantitative estimand. The data sources (e.g., archives, documents), the methods of data collection (e.g., in-depth interviews, group interviews, and/or observations), and the forms of the data (e.g., tape recording, video material, diary, photo, and/or field notes) should be adequate and appropriate to address the research question. The term 'bias in qualitative data collection methods' may be supplemental to other terms for types of detection bias or types of selection bias.

Term: bias in qualitative analysis
Definition: A qualitative research bias in which the analysis approach is not appropriate for the research question and qualitative approach.
Comment for application: Bias is defined as a systematic distortion in research results (estimation of effect, association, or inference). Distortions in research results mean differences between the reported results (findings, conclusions, effect estimates) and the actuality (the truth, the estimand [the quantity targeted for estimation]). In qualitative research, the actuality may include multiple meanings that individuals or groups assign to concepts, and there is no quantitative estimand. The analysis approach should be appropriate for the research question and qualitative approach (design). The term 'bias in qualitative analysis' may be supplemental to other terms for types of analysis bias. When interpretation is an integral part of qualitative analysis, bias in the interpretive analysis should use the term 'bias in qualitative analysis' rather than 'cognitive interpretive bias in reporting'.

Term: incoherence among data, analysis, and interpretation
Definition: There are one or more mismatches among hypothesis, data collected, data analysis, and results interpretation in the study report.
Comment for application: The term mismatch applies to an inappropriate or wrong or inadequate relationship.

Term: mixed methods research bias
Definition: A bias specific to the coordination of design, conduct, analysis, or reporting of qualitative research and quantitative research within the same research project.
Comment for application: Mixed methods research is a research approach that combines both qualitative and quantitative research methods within a single study or research project. This methodology aims to provide a more comprehensive understanding of a research problem by integrating the strengths of both qualitative and quantitative research. Examples of mixed methods research include combining surveys with in-depth interviews, using quantitative data to identify patterns and trends followed by qualitative data to explore the underlying reasons and meanings, or incorporating qualitative findings to help interpret and validate quantitative results. Overall, mixed methods research provides a more holistic understanding of a research question by acknowledging and leveraging the strengths of both qualitative and quantitative approaches.

 

To participate you can join the Scientific Evidence Code System (SEVCO) Expert Working Group at https://fevir.net/resources/Project/27845.

 

The GRADE Ontology Working Group discussed opposing votes and comments regarding 3 risk of bias terms and changed the definitions and comments for application accordingly (Risk of bias, Risk of bias across studies, Risk of bias within a study); these terms are now available for re-vote. Two additional terms (Inconsistency, Indirectness) were not discussed today and are still open for voting. This leaves 5 terms open for voting and continued discussion. We anticipate discussing the terms Inconsistency and Indirectness next week.

 

The group also discussed translation of the GRADE terms into several languages, including French, Spanish, Portuguese, and Norwegian. This work will take place during the Ontology Management Working Group meeting time on Tuesday afternoons from 3 to 4 pm Eastern time. Today’s meeting was recorded, and you can view the video here.

 

Please visit the term pages via the links in the table below and click the Comment button if you would like to share any comments that will be openly viewed by anyone visiting the page.  You may also click the Vote button to anonymously register your agreement or disagreement with this term.  If you vote ‘No’ you need to add a comment (along with your vote, not publicly shared with your name) explaining what change is needed to reach agreement.

Term: Risk of bias
Definition: The potential for systematic error in the results or findings of a study or across studies.
Comment for application: Related terms used by others for "risk of bias" include "internal validity problems", "study limitations", and "methodological limitations". In the definition of risk of bias, the "potential for" covers the likelihood of and the magnitude of. A systematic error is a difference between the reported results (findings, conclusions, or effect estimates) and the actuality (the truth, the estimand, or the true value targeted for estimation). The systematic error may occur at any stage in the conception and design of a study or in the collection, analysis, interpretation, or reporting of data. Risk of bias is one of the domains that can impact the rating of the certainty of evidence. In GRADE, the term 'risk of bias' (most frequently used for 'Risk of bias across studies') is applied to the body of evidence for a single outcome. Best practice may include precisely using the code for 'Risk of bias across studies' in computer applications but the shorter phrase 'Risk of bias' when preferred for readability. For computer applications, the term 'Risk of bias' is discouraged; it is preferable to use either 'Risk of bias across studies' or 'Risk of bias within a study' to represent the context more precisely.

Term: Risk of bias across studies
Definition: The potential for systematic error in results or findings across studies.
Comment for application: Related terms used by others for "risk of bias" include "internal validity problems", "study limitations", and "methodological limitations". In the definition of risk of bias, the "potential for" covers the likelihood of and the magnitude of. A systematic error is a difference between the reported results (findings, conclusions, or effect estimates) and the actuality (the truth, the estimand, or the true value targeted for estimation). The systematic error may occur at any stage in the conception and design of a study or in the collection, analysis, interpretation, or reporting of data. Risk of bias is one of the domains that can impact the rating of the certainty of evidence. In GRADE, the term 'risk of bias' (most frequently used for 'Risk of bias across studies') is applied to the body of evidence for a single outcome. Best practice may include precisely using the code for 'Risk of bias across studies' in computer applications but the shorter phrase 'Risk of bias' when preferred for readability.

Term: Risk of bias within a study
Definition: The potential for systematic error in the results or findings from a single study.
Comment for application: Related terms used by others for "risk of bias" include "internal validity problems", "study limitations", and "methodological limitations". In the definition of risk of bias, the "potential for" covers the likelihood of and the magnitude of. A systematic error is a difference between the reported results (findings, conclusions, or effect estimates) and the actuality (the truth, the estimand, or the true value targeted for estimation). The systematic error may occur at any stage in the conception and design of a study or in the collection, analysis, interpretation, or reporting of data.

Term: Inconsistency
Definition: Variations in the findings across studies or analyses that were considered to estimate the effect.
Comment for application: Variations may include differences across estimates from different studies or may include differences across estimates from different analyses (such as sensitivity analyses) within studies. Variations may include differences in the magnitude or direction of the findings. Criteria for evaluating inconsistency include (but are not limited to) similarity of point estimates, extent of overlap of confidence intervals, and statistical criteria including tests of heterogeneity, I², or Tau². Inconsistency is one of the domains that can impact the rating of the certainty of evidence. Related terms used by others for "inconsistency" include "heterogeneity", "statistical heterogeneity", and "clinical heterogeneity".

Term: Indirectness
Definition: Mismatch between the populations, the exposures or interventions, the comparators, or the outcomes measured in the studies or analyses that were considered to estimate the effect and those under consideration in a question of interest.
Comment for application: The question of interest may vary with context, such as the key considerations for a guideline or systematic review. Indirectness is one of the domains that can impact the rating of the certainty of evidence. Related terms used by others for "indirectness" include "lacking direct relevance" and "external validity problems".
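The statistical criteria mentioned for inconsistency (tests of heterogeneity, I², Tau²) can be illustrated with the standard Cochran's Q and I² calculations. This is a generic textbook sketch, not SEVCO- or GRADE-specific code, and the study estimates below are made up:

```python
def heterogeneity(estimates, standard_errors):
    """Cochran's Q and the I^2 statistic for a set of study effect estimates.

    Q weighs each study's squared deviation from the fixed-effect pooled
    estimate by the inverse of its variance; I^2 expresses the percentage of
    variability beyond what chance alone would produce.
    """
    weights = [1 / se ** 2 for se in standard_errors]
    pooled = sum(w * y for w, y in zip(weights, estimates)) / sum(weights)
    q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, estimates))
    df = len(estimates) - 1
    i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, q, i_squared

# Hypothetical log-odds-ratio estimates from three studies:
pooled, q, i2 = heterogeneity([0.1, 0.3, 0.5], [0.1, 0.1, 0.1])
```

With these made-up inputs the point estimates differ substantially relative to their standard errors, so I² comes out high, which under the criteria above would suggest rating down for inconsistency.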

 

The ResearchOnFHIR Working Group voted unanimously to retire this meeting to concentrate on other aspects of the HEvKA Project. As a result, the Project Management Working Group Meeting is moving again and will meet at this time (12 pm Eastern on Friday).

 

Releases today on the FEvIR platform:

Computable Publishing®: Comparative Evidence Report Authoring Tool version 0.12.0 (January 5, 2024) creates and displays a Composition Resource with a ComparativeEvidenceReport Profile.

 Release 0.12.0 (January 5, 2024) adds a relatedArtifact (with ‘derived-from’ type) with a resourceReference (to the ResearchStudy) to all the Evidence Resources created.

 

 

Quote for Thought: “Procrastination is my sin. It brings me naught but sorrow. I know that I should stop it. In fact, I will--tomorrow”--Gloria Pitzer

Friday

9-10 am 

Risk of Bias Terminology WG

Friday

10-11 am 

GRADE Ontology WG

Friday

12-1 pm

Project Management WG

 

To join any of these meetings:

________________________________________________________________________________

Microsoft Teams meeting

Join on your computer, mobile app or room device

Click here to join the meeting *New Link!

Meeting ID: 279 232 517 719
Passcode: 8pCpbF

Download Teams | Join on the web

Or call in (audio only)

+1 929-346-7156,,35579956#   United States, New York City

Phone Conference ID: 355 799 56#

Find a local number

Meeting support by ComputablePublishing.com

________________________________________________________________________________

 

 

 

 

 

Joanne Dehnbostel MS, MPH

Research and Analysis Manager, Computable Publishing LLC

 


 

Making Science Machine-Interpretable
http://computablepublishing.com 

 

Joanne Dehnbostel

Jan 8, 2024, 7:21:58 PM1/8/24
to Health Evidence Knowledge Accelerator (HEvKA)

 

6 people (CE, HL, JD, JT, KS, KW) participated today in up to 4 active working group meetings.

Today the Project Management Working Group discussed preparation for this week's meetings, Connectathon preparation, and HL7 terminology coordination.

The Setting the Scientific Record on FHIR Working Group took a deep dive into the GRADEpro app to understand the best way of mapping GRADEpro to FHIR.

The CQL Development Working Group (HL7 CDS EBMonFHIR sub-WG) created a project page for the working group and discussed representative examples to be created in CQL as a reference guide. 

The Statistic Terminology Working Group reviewed the progress of voting on the 7 terms that have been out for vote and found that although none have opposing votes, they do not yet have enough votes to be approved. The group then discussed how to improve participation and defined one additional term (measure of discrimination), so there are currently 8 terms open for vote.

Term: calibration intercept
Definition: A measure of calibration that is the difference between the mean expected value and the mean observed value.
Alternative Terms: calibration-in-the-large
Comment for application: The calibration intercept is computed from a statistical model for calibration of binary variables (0 or 1) where the log odds of the predicted probabilities is a linear function of the empirical frequencies. The notion of calibration-in-the-large is that the intercept is a gross assessment of whether the average prediction matches the average outcome; however, this interpretation is exquisitely sensitive to the choice of referent factors within the prediction model.

Term: calibration slope
Definition: A measure of calibration computed from the rate of change in observed value per unit change in the predicted value.
Alternative Terms: calibration-in-the-small
Comment for application: The calibration slope is computed from a statistical model for calibration of binary variables (0 or 1) where the log odds of the predicted probabilities is a linear function of the empirical frequencies. A slope of 1 represents perfect calibration; 0 represents lack of correlation. Inspection of the graph allows identification of areas of under- and overconfidence.

Term: mean calibration
Definition: A measure of calibration that is the average of a function of the difference between the expected values and the observed values.
Comment for application: For predictive modeling of non-continuous variables, the mean calibration is a measure of calibration that is the average of a function of the difference between the expected probabilities and the observed frequencies. The expected values may be computed (as in predictive models) or may be derived from reference data (as typical for a measurement device). When the function is the square of the difference and the variables are binary (0 or 1), the measure is called the Brier score.

Term: standard error of the mean
Definition: A measure of dispersion applied to means across hypothetical repeated random samples.
Alternative Terms: SEM
Comment for application: A standard error is used to quantify the uncertainty around a statistical estimate due to random sampling error. The standard error of the mean is calculated by dividing the sample standard deviation (STATO:0000237) by the square root of n, the size (number of observations) of the sample.

Term: standard error of the proportion
Definition: A measure of dispersion applied to proportions across hypothetical repeated random samples.
Comment for application: <See standard error of the proportion for the comment for application which includes the formula.>

Term: standard error of the difference between independent means
Definition: A measure of dispersion applied to differences between means of independent groups across hypothetical repeated random samples.
Comment for application: <See standard error of the difference between independent means for the comment for application which includes the formula.>

Term: standard error of the difference between independent proportions
Definition: A measure of dispersion applied to differences between proportions arising from independent groups across hypothetical repeated random samples.
Comment for application: <See standard error of the difference between independent proportions for the comment for application which includes the formula.>

Term: measure of discrimination
Definition: A statistic that quantifies the degree to which a classifier can distinguish among two or more groups.
Comment for application: A classifier is a rule, formula, algorithm, or procedure used to label an instance based on its attributes.
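The standard-error definitions above, and the Brier score mentioned under mean calibration, follow well-known textbook formulas. The sketch below uses those standard forms (it is not taken from the SEVCO term pages, whose formulas are elided here):

```python
import math

def sem(sample):
    """Standard error of the mean: sample standard deviation divided by sqrt(n)."""
    n = len(sample)
    mean = sum(sample) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    return sd / math.sqrt(n)

def se_proportion(p, n):
    """Standard error of a proportion (textbook form): sqrt(p(1-p)/n)."""
    return math.sqrt(p * (1 - p) / n)

def se_diff_means(se1, se2):
    """SE of the difference between two independent means:
    square root of the sum of the squared standard errors."""
    return math.sqrt(se1 ** 2 + se2 ** 2)

def brier_score(predicted, observed):
    """Mean squared difference between predicted probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(predicted)
```

For example, `sem([1, 2, 3, 4, 5])` divides the sample standard deviation (about 1.58) by the square root of 5, matching the comment for application above.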

To participate you can join the Scientific Evidence Code System (SEVCO) Expert Working Group at https://fevir.net/resources/Project/27845.

The HL7 Learning Health Systems (LHS) working group included Connectathon preparation/coordination for the EBM Connectathon Track in today's agenda. 

Releases today on the FEvIR Platform: 

The Fast Evidence Interoperability Resources (FEvIR) Platform is available for use now but is “pre-release”. The current version is 0.201.1 (January 8, 2024). Viewing resources is open without login. Signing in is free and required to create content (which can then only be edited by the person who created the content).

Release 0.201.1 (January 8, 2024) fixed a bug that prevented people from joining a Voting Group.

Computable Publishing®: ClinicalTrials.gov-to-FEvIR Converter version 4.10.1 (January 8, 2024) converts ClinicalTrials.gov JSON to FEvIR Resources in FHIR JSON.

Release 4.10.0 (January 8, 2024) changes the Eligibility Criteria to use a Group resource instead of EvidenceVariable, reflecting recent changes in the FHIR standard.

Quote for Thought: “Everything that can be counted does not necessarily count; everything that counts cannot necessarily be counted.” -Albert Einstein

Joanne Dehnbostel

Jan 9, 2024, 5:33:03 PM1/9/24
to Health Evidence Knowledge Accelerator (HEvKA)

 

 

10 people (BA, CE, CM, HL, JD, KR, KS, KW, MA, RC) participated today in up to 3 active working group meetings.

Today the Measuring the Rate of Scientific Knowledge Transfer Working Group briefly discussed ongoing changes to the user interface on the FEvIR platform for overall project support. Additional topics included making a plan for study design and calculating a required sample size for the project. Using two different online calculators, we arrived at required sample sizes of 16 and 17, respectively. We discussed using one month as the effect size and using a Cox proportional hazards model to analyze the rate of scientific knowledge transfer.
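The post does not name the calculators or the formula they implement. One common hand calculation for a two-group time-to-event comparison is Schoenfeld's required-events formula, sketched below; the hazard ratio, allocation fraction, alpha, and power are illustrative assumptions, and the result is not intended to reproduce the group's 16/17 figures:

```python
from math import ceil, log
from statistics import NormalDist

def schoenfeld_events(hazard_ratio, alloc=0.5, alpha=0.05, power=0.8):
    """Required number of events for a two-group Cox / log-rank comparison
    (Schoenfeld, 1983): D = (z_{1-alpha/2} + z_power)^2 / (p(1-p) * ln(HR)^2),
    where p is the fraction allocated to one group."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_power = z.inv_cdf(power)
    return ceil((z_alpha + z_power) ** 2
                / (alloc * (1 - alloc) * log(hazard_ratio) ** 2))

# Illustrative: detecting a hazard ratio of 2 with equal allocation,
# 5% two-sided alpha, and 80% power.
events_needed = schoenfeld_events(2.0)
```

Note this formula yields the number of *events* required, not the number of subjects; the sample size then depends on the expected event rate over the follow-up period.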

The StatisticsOnFHIR Working Group (HL7 CDS EBMonFHIR sub-WG) continued to review the recently developed FEvIR®: ResearchStudy Builder/Viewer and entered additional data for an actual research study.

The Ontology Management Working Group discussed coordination with HL7 terminology and added translations for the terms "Risk of Bias" and "Certainty of Evidence" to the GRADE ontology in Portuguese, noting that there are some user interface adjustments needed to the software for adding term translations. 

The HL7® Biomedical Research and Regulation (BRR) Work Group discussed proposed changes to the ResearchSubject Resource (https://confluence.hl7.org/display/BRR/ResearchSubject+Resource) and approved changes on FHIR tracker FHIR-43561.

Quote for Thought:  "A ship in harbor is safe, but that is not what ships are built for."--John Shedd

Friday

9-10 am 

Risk of Bias Terminology WG

Friday

10-11 am 

GRADE Ontology WG

Friday

12-1 pm

Project Management

 


 

 

 

 

 


 

Joanne Dehnbostel

Jan 10, 2024, 1:14:00 PM1/10/24
to Health Evidence Knowledge Accelerator (HEvKA)

 

 

5 people (CE, JD, JO, KS, KW) participated today in up to 2 active working group meetings.

Today the Funding the Knowledge Ecosystem Infrastructure Working Group discussed the current activities within the Accelerating Care Transformation (ACT) group including the Chronic Kidney Disease Learning Community.

The Communications Working Group discussed a presentation of the Evidence Based Medicine Implementation Guide (EBM IG) for the upcoming HL7 working group meeting and how to communicate the need for additional statisticians to join our SEVCO Statistics Terminology Working Group both as meeting attendees/term editors and as voting members.

The HL7 Clinical Decision Support Working Group met briefly and approved one hook proposal https://github.com/cds-hooks/docs/pulls. There will be no CDS meeting next week to allow participation in the HL7 Connectathon. 

Quote for Thought: “Success usually comes to those who are too busy to be looking for it.” – Henry David Thoreau

Joanne Dehnbostel

Jan 12, 2024, 2:20:01 AM1/12/24
to Health Evidence Knowledge Accelerator (HEvKA)

 

 

5 people (CE, GL, IK, JD, KS) participated today in up to 2 active working group meetings.

Today the EBM Implementation Guide Working Group (HL7 CDS EBMonFHIR sub-WG) discussed preparations for the 2024-01 Evidence Based Medicine Track of the HL7 FHIR Connectathon, which will take place January 16-18, 2024.

The Computable EBM Tools Development Working Group continued to review our approach to searching for Composition resources on the FEvIR platform (PICO search support). We then discussed preparations for the HL7 Connectathon.

The HL7 Biomedical Research and Regulation (BRR) Work Group continued to review Eligibility Criteria using the FHIR Group Resource https://confluence.hl7.org/display/BRR/Expressing+Eligibility+Criteria+using+the+Group+%28R6%29+Resource+-+Reference+Page

Quote for Thought: "Let your joy be in your journey - not in some distant goal"-Tim Cook

Joanne Dehnbostel

Jan 13, 2024, 4:53:12 AM1/13/24
to Health Evidence Knowledge Accelerator (HEvKA)

 

 

16 people (BA, CA-D, CM, HK, HL, HS, JB, JD, KS, KW, MA, PW, SL, SM, SS, TD) participated today in up to 3 active working group meetings.

Today the Risk of Bias Terminology Working Group found that none of the terms open for vote since last week had received enough votes to approve them for the code system, so they are still open for vote. The group then discussed and defined one additional term (inappropriate interpretation of integration of qualitative and quantitative findings), so there are currently 6 terms open for vote. Today's meeting was recorded, and the recording is available here.

 

Term: bias in qualitative research design
Definition: A qualitative research bias in which the qualitative approach used in a study is not appropriate for the research question and problem.
Comment for application: Bias is defined as a systematic distortion in research results (estimation of effect, association, or inference). Distortions in research results means differences between the reported results (findings, conclusions, effect estimates) and the actuality (the truth, the estimand [the quantity targeted for estimation]). In qualitative research, the actuality may include multiple meanings that individuals or groups assign to concepts, and there is no quantitative estimand.

The qualitative approach used in a study should be appropriate for the research question and problem.

Common qualitative research approaches include (this list is not exhaustive):

Ethnography - The aim of the study is to describe and interpret the shared cultural behavior of a group of individuals.
Phenomenology - The study focuses on the subjective experiences and interpretations of a phenomenon encountered by individuals.
Narrative research - The study analyzes life experiences of an individual or a group.
Grounded theory - Generation of theory from data in the process of conducting research (data collection occurs first).
Case study - In-depth exploration and/or explanation of issues intrinsic to a particular case. A case can be anything from a decision-making process, to a person, an organization, or a country.
Qualitative description - There is no specific methodology, but a qualitative data collection and analysis, e.g., in-depth interviews or focus groups, and hybrid thematic analysis (inductive and deductive).

Reference - Hong QN, Pluye P, Fàbregues S, Bartlett G, Boardman F, Cargo M, Dagenais P, Gagnon M-P, Griffiths F, Nicolau B, O'Cathain A, Rousseau M-C, Vedel I. Mixed Methods Appraisal Tool (MMAT), version 2018. Registration of Copyright (#1148552), Canadian Intellectual Property Office, Industry Canada. Accessed at: http://mixedmethodsappraisaltoolpublic.pbworks.com/w/file/fetch/127916259/MMAT_2018_criteria-manual_2018-08-01_ENG.pdf

Term: bias in qualitative data collection methods
Definition: A qualitative research bias in which the data sources, the methods of data collection, and the forms of data are not adequate or appropriate to address the research question.
Comment for application: Bias is defined as a systematic distortion in research results (estimation of effect, association, or inference). Distortions in research results means differences between the reported results (findings, conclusions, effect estimates) and the actuality (the truth, the estimand [the quantity targeted for estimation]). In qualitative research, the actuality may include multiple meanings that individuals or groups assign to concepts, and there is no quantitative estimand.

The data sources (e.g., archives, documents), the methods of data collection (e.g., in-depth interviews, group interviews, and/or observations), and the forms of the data (e.g., tape recording, video material, diary, photo, and/or field notes) should be adequate and appropriate to address the research question. The term 'bias in qualitative data collection methods' may be supplemental to other terms for types of detection bias or types of selection bias.

Term: bias in qualitative analysis
Definition: A qualitative research bias in which the analysis approach is not appropriate for the research question and qualitative approach.
Comment for application: Bias is defined as a systematic distortion in research results (estimation of effect, association, or inference). Distortions in research results means differences between the reported results (findings, conclusions, effect estimates) and the actuality (the truth, the estimand [the quantity targeted for estimation]). In qualitative research, the actuality may include multiple meanings that individuals or groups assign to concepts, and there is no quantitative estimand.

The analysis approach should be appropriate for the research question and qualitative approach (design). The term 'bias in qualitative analysis' may be supplemental to other terms for types of analysis bias. When interpretation is an integral part of qualitative analysis, bias in the interpretive analysis should use the term 'bias in qualitative analysis' rather than 'cognitive interpretive bias in reporting'.

Term: incoherence among data, analysis, and interpretation
Definition: There are one or more mismatches among hypothesis, data collected, data analysis, and results interpretation in the study report.
Comment for application: The term mismatch applies to an inappropriate, wrong, or inadequate relationship.

Term: mixed methods research bias
Definition: A bias specific to the coordination of design, conduct, analysis, or reporting of qualitative research and quantitative research within the same research project.
Comment for application: Mixed methods research is a research approach that combines both qualitative and quantitative research methods within a single study or research project. This methodology aims to provide a more comprehensive understanding of a research problem by integrating the strengths of both qualitative and quantitative research. Examples of mixed methods research include combining surveys with in-depth interviews, using quantitative data to identify patterns and trends followed by qualitative data to explore the underlying reasons and meanings, or incorporating qualitative findings to help interpret and validate quantitative results. Overall, mixed methods research provides a more holistic understanding of a research question by acknowledging and leveraging the strengths of both qualitative and quantitative approaches.

Term: inappropriate interpretation of integration of qualitative and quantitative findings
Definition: A mixed methods research bias in which the process of combining the results of the constituent studies is flawed.
Comment for application: This criterion is related to meta-inference, which is defined as the overall interpretations derived from integrating qualitative and quantitative findings (Teddlie and Tashakkori, 2009). Meta-inference occurs during the interpretation of the findings from the integration of the qualitative and quantitative components, and shows the added value of conducting a mixed methods study rather than having two separate studies. (Pluye et al 2018)

 

To participate you can join the Scientific Evidence Code System (SEVCO) Expert Working Group at https://fevir.net/resources/Project/27845.

 

The GRADE Ontology Working Group found that all 5 of the terms open for voting last week (Risk of Bias, Risk of bias across studies, Risk of bias within a study, Inconsistency, and Indirectness) received opposing votes. The group worked to revise the definition and comment for application for the term Risk of Bias, and this term is once again open for voting. The remaining 4 terms will be revised in subsequent meetings. There is currently 1 term open for voting. The meeting was recorded, and the meeting recording is available here. The group agreed that our collective homework will be to come to next week's meeting prepared to discuss how to disambiguate between the terms Inconsistency and Incoherence, and to provide some GRADE references to help with these definitions.

Term: Risk of bias
Definition: The potential for systematic error in the results or findings of a study or across studies due to limitations in study design and execution.
Comment for application: Related terms used by others for "risk of bias" include "internal validity problems", "study limitations", and "methodological limitations". In the definition of risk of bias, the "potential for" covers the likelihood of and the magnitude of. A systematic error is a difference between the reported results (findings, conclusions, or effect estimates) and the actuality (the truth, the estimand, or the true value targeted for estimation). The systematic error may occur at any stage in the conception and design of a study or in the collection, analysis, interpretation, or reporting of data. Risk of bias is one of the domains that can impact the rating of the certainty of evidence. In GRADE, the term 'risk of bias' (most frequently used for 'Risk of bias across studies') is applied to the body of evidence for a single outcome. Best practice may include precisely using the code for 'Risk of bias across studies' in computer applications but the shorter phrase 'Risk of bias' when preferred for readability. For computer applications, the term 'Risk of bias' is discouraged; it is preferable to use either 'Risk of bias across studies' or 'Risk of bias within a study' to represent the context more precisely.

 

 

The Project Management Working Group created an agenda for the following week:

Many meetings will be cancelled next week due to the Martin Luther King federal holiday in the US and the HL7 Connectathon.

 

      • Monday, January 15:
        • 8 am Eastern: Project Management - Meeting cancelled for the observance of Martin Luther King Day!
        • 9 am Eastern: Setting the Scientific Record on FHIR Working Group - Meeting cancelled for the observance of Martin Luther King Day!
        • 10 am Eastern: CQL Development Working Group (HL7 CDS EBMonFHIR sub-WG) - Meeting cancelled for the observance of Martin Luther King Day!
        • 2 pm Eastern: Statistic Terminology Working Group-Meeting cancelled for the observance of Martin Luther King Day!
        • 3 pm Eastern: HL7 Learning Health Systems (LHS) WG- Meeting cancelled for the observance of Martin Luther King Day!

Tuesday, January 16:

        • 9 am Eastern: Measuring the Rate of Scientific Knowledge Transfer- Meeting Cancelled to allow participation in the HL7 Connectathon!
        • 2 pm Eastern: StatisticsOnFHIR Working Group (HL7 CDS EBMonFHIR sub-WG) – Meeting Cancelled to allow participation in the HL7 Connectathon!
        • 3 pm Eastern: Ontology Management Working Group – Meeting Cancelled to allow participation in the HL7 Connectathon!
        • 4 pm Eastern: HL7 BRR Working Group Meeting Cancelled to allow participation in the HL7 Connectathon!

Wednesday, January 17:

        • 8 am Eastern: Funding the Knowledge Ecosystem Infrastructure Working Group –Meeting Cancelled to allow participation in the HL7 Connectathon!
        • 9 am Eastern: Communications Working Group –Meeting Cancelled to allow participation in the HL7 Connectathon!

Thursday, January 18:

        • 8 am Eastern: EBM Implementation Guide Working Group (HL7 CDS EBMonFHIR sub-WG) – Meeting Cancelled to allow participation in the HL7 Connectathon!
        • 9 am Eastern: Computable EBM Tools Development Working Group – Meeting Cancelled to allow participation in the HL7 Connectathon!
        • 12 pm Eastern: HL7 BRR Work Group - Meeting Cancelled to allow participation in the HL7 Connectathon!

Friday, January 19:

        • 9 am Eastern: Risk of Bias Terminology Working Group – review SEVCO terms, term development.
        • 10 am Eastern: GRADE Ontology Working Group – term development -  (Inconsistency, Incoherence, Risk of Bias) - introduction for new participants if needed
        • 12 pm Eastern: Project Management - Prepare weekly agenda for the next week.

 

Quote for Thought:    "I think I can. I think I can. I think I can. I know I can."–Watty Piper, The Little Engine That Could

Joanne Dehnbostel

Jan 16, 2024, 4:38:11 PM1/16/24
to Health Evidence Knowledge Accelerator (HEvKA)

There were no HEvKA working group meetings today to allow for participation in the HL7 January 2024 Connectathon. Here is a report of our activities in the Evidence Based Medicine (EBM) Track at that meeting:

HL7 Connectathon Day 1-January 16, 2024

Attendance:

Brian Alper

Khalid Shahin

Joanne Dehnbostel

Ilkka Kunnamo

Bruce Bray

Brian Kaney

Liz Turi

Renato Calamai

Teresa Younkin

Sharon Hibay

Tanesha Lindsay

 

Goal 1 – Introduce EBMonFHIR IG


Summary: We introduced the Evidence Based Medicine Implementation Guide to representatives from multiple organizations. We demonstrated using the FEvIR Platform to create a computable guideline using FHIR Resources and discussed the ability to create an organization specific profile of the Guideline Profile for a professional society such as the American College of Cardiology. We demonstrated how we could convert Medline citations or RIS based citations to FHIR Citation Resources. 
We also walked a representative from the HL7 Da Vinci Accelerator through creating a cohort definition in FHIR (specifically the CohortDefinition Profile of the Group Resource) and discussed examples for prior authorization and for risk adjustment (e.g., by defining a group of hospitals within a specific geographic area). 

 

Goal 2 – Demonstrate Population/Intervention searches for FHIR Compositions (using FHIR ArtifactAssessment - Classification Profile)

Bonus point – coordination with terminologies for alias mapping

Summary: We tested the API from the FEvIR Platform, which uses the Classification Profile of the ArtifactAssessment Resource in its search index. We established that Duodecim was able to use the API to specify a search and retrieve the desired FHIR Composition Resources. We were also able to demonstrate the "bonus point", where Duodecim specified their search request using their alias codes and received the search response based on SNOMED-CT and ICD-10 codes for a population and intervention (PICO) search. 


Recent Releases on the FEvIR Platform:

  • FEvIR® API version 0.6.0 (January 15, 2024) supports GET and POST requests to retrieve Resources from the FEvIR Platform, retrieve concept JSON from CodeSystem Resources on the FEvIR Platform, or convert a ClinicalTrials.gov record to a FHIR Bundle Resource.
    • Release 0.6.0 (January 15, 2024) supports search requests to retrieve search results with an array of JSON objects with title, url, and foi elements.
  • FEvIR®: CodeSystem Builder/Viewer version 0.40.0 (January 16, 2024) creates and displays code system terms (concepts) in a CodeSystem Resource.
    • Release 0.39.0 (January 15, 2024) Alternative Terms data entry for language (used for translations) now includes country-specific codes for German, English, Spanish, French, Italian, Dutch, Portuguese, and Chinese languages.
    • Release 0.40.0 (January 16, 2024) Alternative Terms data entry for language (used for translations) now includes country-specific codes for the Swedish language.
  • FEvIR®: RAte of Dissemination Assessment Research (RADAR) Tool version 0.1.0 (January 16, 2024) facilitates research conduct and project management for an investigation to measure the rate of scientific knowledge transfer.
    • Release 0.1.0 (January 16, 2024) introduces FEvIR®: RAte of Dissemination Assessment Research (RADAR) Tool to facilitate research conduct and project management for an investigation to measure the rate of scientific knowledge transfer.
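As a sketch of how a client such as Duodecim might build a PICO-style search request against an API like the one described, the request side is just standard URL construction. The endpoint path and parameter names below are hypothetical placeholders, not the documented FEvIR API contract:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Hypothetical endpoint and parameter names -- consult the FEvIR API
# documentation for the real path and query contract.
BASE = "https://fevir.net/api/search"

def build_search_request(population_code, intervention_code):
    """Build a GET request for a population/intervention search
    (parameter names are illustrative assumptions)."""
    query = urlencode({"population": population_code,
                       "intervention": intervention_code})
    return Request(f"{BASE}?{query}", method="GET")

req = build_search_request("SNOMED:38341003", "SNOMED:372913009")
# req.full_url now carries the percent-encoded search query.
```

Sending the request (e.g., with `urllib.request.urlopen`) would return the array of JSON objects with title, url, and foi elements described in the release note above.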

Quote for Thought: "Knowing how to think empowers you far beyond those who know only what to think" - Neil DeGrasse Tyson

Joanne Dehnbostel

Jan 17, 2024, 8:54:15 PM1/17/24
to Health Evidence Knowledge Accelerator (HEvKA)

 

 

There were no HEvKA working group meetings today to allow for participation in the HL7 January 2024 Connectathon. Here is a report of our activities in the Evidence Based Medicine (EBM) Track at that meeting:

HL7 Connectathon Day 2-January 17, 2024

Attendance:

Brian Alper

Khalid Shahin

Joanne Dehnbostel

Ilkka Kunnamo

Gregor Lichtner

Dipti Gandhi

Julia Dawson

Matt Elrod

Renato Calamai

Ward Weistra

Tanesha Lindsay

 

Goal – Introduce EBMonFHIR IG

Summary: 

We demonstrated the display and editing of a ComparativeEvidenceReport Profile of Composition Resource with the Computable Publishing®: Comparative Evidence Report Viewing Tool (showing https://fevir.net/resources/Composition/178426) and Computable Publishing®: Comparative Evidence Report Authoring Tool, and also reviewed a ComparativeEvidence Profile of Evidence Resource (showing https://fevir.net/resources/Evidence/104155). 

We demonstrated the automated conversion of a ClinicalTrials.gov study record (NCT05704088) to FHIR Resources (starting with https://fevir.net/resources/ResearchStudy/193262) and how this uses the Group Resource for the study eligibility criteria.
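As a rough illustration of that mapping, eligibility criteria can be carried as Group.characteristic entries; the sketch below uses hypothetical criteria, not the actual converter output for NCT05704088:

```python
# Illustrative sketch of study eligibility criteria carried in a FHIR Group
# resource. The criteria below are hypothetical, not the real NCT05704088 output.
eligibility_group = {
    "resourceType": "Group",
    "type": "person",
    "membership": "definitional",
    "characteristic": [
        {   # inclusion criterion: adults (age 18 years or older)
            "code": {"text": "age"},
            "valueRange": {"low": {"value": 18, "unit": "years"}},
            "exclude": False,
        },
        {   # exclusion criterion: prior surgery at the target site
            "code": {"text": "prior surgery at target site"},
            "valueBoolean": True,
            "exclude": True,
        },
    ],
}

inclusion = [c for c in eligibility_group["characteristic"] if not c["exclude"]]
exclusion = [c for c in eligibility_group["characteristic"] if c["exclude"]]
print(len(inclusion), len(exclusion))  # 1 1
```

The key design point is that exclusion criteria are not a separate element: each characteristic carries an exclude flag, so one Group holds the whole eligibility definition.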

 

Goal – Solve complex examples of RelativeTime datatype (e.g. with Group.characteristic.timing use)

 

Summary:

To represent a timing of “From the beginning of the day before surgery until the end of the surgery” we suggested 2 RelativeTime datatype instances with a pattern like:

 

{ conceptCode: "beginning of the day of surgery" or "00:00 on the day of surgery",
  offsetDuration: > -1 day or > -24 hours },

{ conceptCode: "end of surgery",
  offsetDuration: < 0 }
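Rendered as data (using the element names from the pseudocode above, conceptCode and offsetDuration, which are working names and may differ in the final RelativeTime datatype), the two bounds could look like:

```python
# Hypothetical data rendering of the two RelativeTime bounds above. The element
# names (conceptCode, offsetDuration) follow this report's pseudocode and may
# not match the final FHIR RelativeTime datatype.
start_bound = {
    "conceptCode": {"text": "beginning of the day of surgery"},
    # "> -1 day": at or after 00:00 on the day before surgery
    "offsetDuration": {"value": -24, "unit": "hours", "comparator": ">"},
}
end_bound = {
    "conceptCode": {"text": "end of surgery"},
    # "< 0": strictly before the end of surgery
    "offsetDuration": {"value": 0, "unit": "hours", "comparator": "<"},
}
surgical_window = [start_bound, end_bound]
print(len(surgical_window))  # 2
```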

 

To represent a characteristic of an intervention that it should be executed in at least 2 of 3 daily shifts, we considered either of:

 

Group.characteristic

              .code = “number of shifts/day with intervention executed” 

              .valueQuantity = >= 2

 

or

 

Group.combinationMethod = "at-least" and Group.combinationThreshold = 2, with 3 characteristics, each specifying 1 of the 3 daily shifts
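A minimal sketch of the second option, assuming the combinationMethod/combinationThreshold elements named above and hypothetical shift labels:

```python
# Sketch of "intervention executed in at least 2 of 3 daily shifts" using the
# combinationMethod approach described above. Shift names are hypothetical.
shift_group = {
    "resourceType": "Group",
    "type": "person",
    "combinationMethod": "at-least",
    "combinationThreshold": 2,
    "characteristic": [
        {
            "code": {"text": f"intervention executed during {shift} shift"},
            "valueBoolean": True,
            "exclude": False,
        }
        for shift in ("morning", "evening", "night")
    ],
}
print(shift_group["combinationThreshold"], len(shift_group["characteristic"]))  # 2 3
```

The advantage over the single valueQuantity characteristic is that each shift remains an individually evaluable criterion, with the "2 of 3" logic expressed declaratively at the Group level.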

 

 

 

Releases Today on the FEvIR Platform:

Computable Publishing®: MEDLINE-to-FEvIR Converter version 1.22.3 (January 17, 2024) converts PubMed MEDLINE XML to FHIR JSON.

Release 1.22.3 (January 17, 2024) adds compatibility with the RAte of Dissemination Assessment Research (RADAR) Tool and changes the alert message when a search query has no results.

FEvIR®: RAte of Dissemination Assessment Research (RADAR) Tool version 0.2.0 (January 17, 2024) facilitates research conduct and project management for an investigation to measure the rate of scientific knowledge transfer.

Release 0.2.0 (January 17, 2024) enables addition of Citations by MEDLINE search queries and improves visual appearance of multiple components.

Computable Publishing®: Comparative Evidence Report Authoring Tool version 0.13.0 (January 17, 2024) creates and displays a Composition Resource with a ComparativeEvidenceReport Profile.

Release 0.13.0 (January 17, 2024) autogenerates the Narrative from the first entry reference in the Population and Study Design sections on initial entry.

Computable Publishing®: ClinicalTrials.gov-to-FEvIR Converter version 4.10.1 (January 17, 2024) converts ClinicalTrials.gov JSON to FEvIR Resources in FHIR JSON.

Release 4.10.1 (January 17, 2024) corrects the ResearchStudy.recruitment.eligibility.type value from ‘EvidenceVariable’ to ‘Group’.

 

 

Quote for thought: "Practice makes progress, not perfect" - Unknown

Joanne Dehnbostel

unread,
Jan 19, 2024, 12:13:35 AM1/19/24
to Health Evidence Knowledge Accelerator (HEvKA)


There were no HEvKA working group meetings today to allow for participation in the HL7 January 2024 Connectathon (Regular HEvKA Meetings will resume tomorrow, Friday January 19).

 

HL7 Connectathon Day 3 – January 18, 2024

A slide presentation used to summarize and present our activities can be seen here: https://docs.google.com/presentation/d/1jBSlbEAr2sUUeaR6t6_PJrMdvHMTcazZ/edit#slide=id.p1

 

We provided a report of the results of the Evidence Based Medicine Track during the entire Connectathon (January 16–18), shown below.

Evidence Based Medicine-“Report Out”

  • What was the track trying to achieve?

Goal 1 – introduce EBMonFHIR IG

Goal 2 – Demonstrate Population/Intervention searches for FHIR Compositions (using FHIR ArtifactAssessment - Classification Profile)

    • Bonus point – coordination with terminologies for alias mapping

Goal 3 – Solve complex examples of RelativeTime datatype (e.g. with Group.characteristic.timing use)

  • List of participants (with logos if you have time and energy)
    1. Brian Alper, Computable Publishing LLC (Track Lead)
    2. Khalid Shahin, Computable Publishing LLC (Track Lead)
    3. Joanne Dehnbostel, Computable Publishing LLC
    4. Ilkka Kunnamo, Duodecim
    5. Bruce Bray, University of Utah Health Care
    6. Brian Kaney, Vermonster LLC
    7. Liz Turi, ONC Technology Standards
    8. Renato Calamai, HL7 Italy
    9. Teresa Younkin, HL7 Da Vinci
    10. Sharon Hibay, Advanced Health Outcomes LLC
    11. Tanesha Lindsay, HHS, ONC
    12. Gregor Lichtner, Universitaetsmedizin Greifswald, Germany
    13. Dipti Gandhi, Gandhi Household
    14. Julia Dawson, The Joint Commission
    15. Matt Elrod, Office of the National Coordinator for Health IT
    16. Ward Weistra, Firely

 

  • Notable achievements - All goals successfully achieved 

Goal 1 – Introduce EBMonFHIR IG


Summary: We introduced the Evidence Based Medicine Implementation Guide to representatives from multiple organizations. 

We demonstrated using the FEvIR Platform to create a computable guideline using FHIR Resources and discussed the ability to create an organization specific profile of the Guideline Profile for a professional society such as the American College of Cardiology. We demonstrated how we could convert Medline citations or RIS based citations to FHIR Citation Resources. 
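For orientation, a converted citation lands in a FHIR Citation Resource shaped roughly like the following (all values hypothetical, not actual converter output):

```python
# Minimal, hypothetical sketch of a FHIR Citation Resource of the kind produced
# by citation conversion; every value here is illustrative, not real output.
citation = {
    "resourceType": "Citation",
    "status": "active",
    "citedArtifact": {
        "identifier": [
            # hypothetical PMID, not a real article identifier
            {"system": "https://pubmed.ncbi.nlm.nih.gov", "value": "12345678"}
        ],
        "title": [{"text": "An example article title"}],
        "publicationForm": [{"publishedIn": {"title": "Example Journal"}}],
    },
}
print(citation["citedArtifact"]["title"][0]["text"])
```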


We also introduced a representative from the HL7 Da Vinci Accelerator to the creation of a cohort definition in FHIR (specifically the CohortDefinition Profile of the Group Resource) and discussed examples for prior authorization and for risk adjustment (e.g., by defining a group of hospitals within a specific geographic area). 

We demonstrated the display and editing of a ComparativeEvidenceReport Profile of Composition Resource with the Computable Publishing®: Comparative Evidence Report Viewing Tool (showing https://fevir.net/resources/Composition/178426) and Computable Publishing®: Comparative Evidence Report Authoring Tool, and also reviewed a ComparativeEvidence Profile of Evidence Resource (showing https://fevir.net/resources/Evidence/104155). 

We demonstrated the automated conversion of a ClinicalTrials.gov study record (NCT05704088) to FHIR Resources (starting with https://fevir.net/resources/ResearchStudy/193262) and how this uses the Group Resource for the study eligibility criteria.

 

 

Goal 2 – Demonstrate Population/Intervention searches for FHIR Compositions (using FHIR ArtifactAssessment - Classification Profile)

Bonus point – coordination with terminologies for alias mapping

Summary: We tested the FEvIR Platform API, which uses the Classification Profile of the ArtifactAssessment Resource in its search index. We established that Duodecim was able to use the API to specify a search and retrieve the desired FHIR Composition Resources. We also demonstrated the "bonus point": Duodecim specified their search request using their alias codes and received the search response based on SNOMED CT and ICD-10 codes for a population and intervention (PICO) search. 
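A Classification entry of the kind indexed for such a search might look roughly like this (the population code is a real SNOMED CT code for diabetes mellitus, used purely as an example; the intervention entry is left as text; the FEvIR index format itself is not shown in this report):

```python
# Rough sketch of an ArtifactAssessment (Classification Profile) entry of the
# kind a PICO search index could hold. The SNOMED CT code is a real code for
# diabetes mellitus used only as an example; the target reference is the
# Composition shown earlier in this report.
classification = {
    "resourceType": "ArtifactAssessment",
    "artifactReference": {"reference": "Composition/178426"},
    "content": [
        {
            "type": {"text": "population"},
            "classifier": [{"coding": [{
                "system": "http://snomed.info/sct",
                "code": "73211009",
                "display": "Diabetes mellitus",
            }]}],
        },
        {
            "type": {"text": "intervention"},
            "classifier": [{"text": "metformin therapy"}],  # text-only classifier
        },
    ],
}
print(len(classification["content"]))  # 2
```

Alias mapping then amounts to translating a requester's local codes to the SNOMED CT / ICD-10 codes stored in classifier entries like these before matching.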

 

Goal 3 – Solve complex examples of RelativeTime datatype (e.g. with Group.characteristic.timing use)

 

To represent a timing of “From the beginning of the day before surgery until the end of the surgery” we suggested 2 RelativeTime datatype instances with a pattern like:

 

{ conceptCode: "beginning of the day of surgery" or "00:00 on the day of surgery",
  offsetDuration: > -1 day or > -24 hours },

{ conceptCode: "end of surgery",
  offsetDuration: < 0 }

 

To represent a characteristic of an intervention that it should be executed in at least 2 of 3 daily shifts, we considered either of:

 

Group.characteristic

             .code = “number of shifts/day with intervention executed” 

             .valueQuantity = >= 2

 

or

 

Group.combinationMethod = "at-least" and Group.combinationThreshold = 2, with 3 characteristics, each specifying 1 of the 3 daily shifts

 

  • Screenshots and/or links to further information

 

https://fevir.net

 

  • Discovered issues / questions (if there are any)  

  • Created a new error message for search where no items are found (discovered in the MEDLINE-to-FEvIR Converter)

  • Created a JIRA ticket https://jira.hl7.org/browse/FHIR-43696 to add RelativeTime to timing[x] in PlanDefinition and ActivityDefinition

 

Releases today on the FEvIR Platform:

 

Computable Publishing®: Comparative Evidence Report Authoring Tool version 0.14.0 (January 18, 2024) creates and displays a Composition Resource with a ComparativeEvidenceReport Profile.

Release 0.14.0 (January 18, 2024) autogenerates the Narrative from the first entry or focus reference throughout the Intervention, Comparator, Baseline Measures, Participant Flow, and Outcomes sections and subsections on initial entry.

FEvIR®: RAte of Dissemination Assessment Research (RADAR) Tool version 0.3.0 (January 18, 2024) facilitates research conduct and project management for an investigation to measure the rate of scientific knowledge transfer.

Release 0.3.0 (January 18, 2024) highlights new Citations added when changing the Citation lists.

 

 

Quote for Thought: “You will face many defeats in life, but never let yourself be defeated.” -Maya Angelou

Joanne Dehnbostel MS, MPH

Research and Analysis Manager, Computable Publishing LLC

 


 

Making Science Machine-Interpretable
http://computablepublishing.com 

 

 

Joanne Dehnbostel

unread,
Jan 20, 2024, 5:24:49 AM1/20/24
to Health Evidence Knowledge Accelerator (HEvKA)


10 people (BA, CA-D, HK, HL, JD, KS, KW, PW, SM, TD) participated today in up to 3 active working group meetings.

 

Risk of Bias Terminology Working Group Updates:

On January 19, the Risk of Bias Terminology Working Group found that 3 terms open for vote last week had received enough votes to approve them for the code system (bias in qualitative research design; bias in qualitative data collection methods; bias in qualitative analysis). The other three terms open for vote last week received opposing votes, so they were discussed, revised, and re-opened for vote (mixed methods research bias; incoherence among data, analysis, and interpretation; inappropriate interpretation of integration of qualitative and quantitative findings). The group then discussed and defined three additional terms (bias in mixed methods research design; ineffective integration of qualitative and quantitative study components; inadequate handling of inconsistency between qualitative and quantitative findings), so there are currently 6 terms open for vote. 

 

Term

Definition

Alternative Terms
(if any)

Comment for application
(if any)

Vote

Comment

incoherence among data, analysis, and interpretation

Any mismatch among hypothesis, data collected, data analysis, and results interpretation in the study report.

The term mismatch applies to an inappropriate or wrong or inadequate relationship.

Vote Comment

mixed methods research bias

A bias specific to the alignment of design, conduct, analysis or reporting of qualitative research and quantitative research within the same research project.

Mixed methods research is a research approach that combines both qualitative and quantitative research methods within a single study or research project. This methodology aims to provide a more comprehensive understanding of a research problem by integrating the strengths of both qualitative and quantitative research. Examples of mixed methods research include combining surveys with in-depth interviews, using quantitative data to identify patterns and trends followed by qualitative data to explore the underlying reasons and meanings, or incorporating qualitative findings to help interpret and validate quantitative results. Overall, mixed methods research provides a more holistic understanding of a research question by acknowledging and leveraging the strengths of both qualitative and quantitative approaches.

Vote Comment

bias in mixed methods research design

A mixed methods research bias in which the mixed methods approach used in a study is not appropriate for the research question and problem.

  • inadequate rationale for mixed methods design

The corresponding signaling question in the Mixed Methods Appraisal Tool (MMAT) is 5.1: Is there an adequate rationale for using a mixed methods design to address the research question?

Common mixed methods designs include:

Convergent design – The QUAL and QUAN components are usually (but not necessarily) concomitant. The purpose is to examine the same phenomenon by interpreting QUAL and QUAN results (bringing data analysis together at the interpretation stage), or by integrating QUAL and QUAN datasets (e.g., data on same cases), or by transforming data (e.g., quantization of qualitative data).

Sequential explanatory design – Results of the phase 1 QUAN component inform the phase 2 QUAL component. The purpose is to explain QUAN results using QUAL findings. E.g., the QUAN results guide the selection of QUAL data sources and data collection, and the QUAL findings contribute to the interpretation of QUAN results.

Sequential exploratory design – Results of the phase 1 QUAL component inform the phase 2 QUAN component. The purpose is to explore, develop and test an instrument (or taxonomy), or a conceptual framework (or theoretical model). E.g., the QUAL findings inform the QUAN data collection, and the QUAN results allow a statistical generalization of the QUAL findings.

Key references: Creswell et al. (2011); Creswell and Plano Clark, (2017); O'Cathain (2010)

Reference - Hong QN, Pluye P, Fàbregues S, Bartlett G, Boardman F, Cargo M, Dagenais P, Gagnon M-P, Griffiths F, Nicolau B, O’Cathain A, Rousseau M-C, Vedel I. Mixed Methods Appraisal Tool (MMAT), version 2018. Registration of Copyright (#1148552), Canadian Intellectual Property Office, Industry Canada. Accessed at: http://mixedmethodsappraisaltoolpublic.pbworks.com/w/file/fetch/127916259/MMAT_2018_criteria-manual_2018-08-01_ENG.pdf

Vote Comment

ineffective integration of qualitative and quantitative study components

A mixed methods research bias in which the qualitative research and quantitative research components are not adequately combined.

Vote Comment

inappropriate interpretation of integration of qualitative and quantitative findings

A mixed methods research bias in which the process of combining the results of the constituent analyses is flawed.

This criterion is related to meta-inference, which is defined as the overall interpretations derived from integrating qualitative and quantitative findings (Teddlie and Tashakkori, 2009). Meta-inference occurs during the interpretation of the findings from the integration of the qualitative and quantitative components, and shows the added value of conducting a mixed methods study rather than having two separate studies. (Pluye et al 2018)

Vote Comment

inadequate handling of inconsistency between qualitative and quantitative findings

A mixed methods research bias in which discrepancies in the results from the qualitative and quantitative components are not adequately addressed.

When integrating the findings from the qualitative and quantitative components, divergences and inconsistencies (also called conflicts, contradictions, discordances, discrepancies, and dissonances) can be found. It is not sufficient to only report the divergences; they need to be explained. Different strategies to address the divergences have been suggested such as reconciliation, initiation, bracketing and exclusion (Pluye et al., 2009b). (Reference - Hong QN, Pluye P, Fàbregues S, Bartlett G, Boardman F, Cargo M, Dagenais P, Gagnon M-P, Griffiths F, Nicolau B, O’Cathain A, Rousseau M-C, Vedel I. Mixed Methods Appraisal Tool (MMAT), version 2018. Registration of Copyright (#1148552), Canadian Intellectual Property Office, Industry Canada. Accessed at: http://mixedmethodsappraisaltoolpublic.pbworks.com/w/file/fetch/127916259/MMAT_2018_criteria-manual_2018-08-01_ENG.pdf)

To participate you can join the Scientific Evidence Code System (SEVCO) Expert Working Group at https://fevir.net/resources/Project/27845.

 

GRADE Ontology Working Group Updates:

On January 19, the GRADE Ontology Working Group found that the term open for voting last week (Risk of Bias) received opposing votes. The group worked to revise the definition and comment for application for the term Risk of Bias and this term is once again open for voting.  Paul Whaley volunteered to curate the comments for the term inconsistency to be used for the future discussion of that term. The group also briefly discussed the GRADE meeting taking place in Miami in May. Our May 3 GRADE Ontology meeting may be cancelled to allow participation and travel from the Miami meeting (to be discussed further at a later date).

 

 

Term

Definition

Alternative Terms
(if any)

Comment for application
(if any)

Risk of bias

The potential for systematic error in the results or findings of a study or across studies due to limitations in their design, conduct, or reporting.

In the definition of risk of bias, the "potential for" covers the likelihood of and the magnitude of systematic error. A "systematic error" is a difference between the reported results of a study or studies (findings, conclusions, or effect estimates) and the actuality (the truth, the estimand, or the true value targeted for estimation). The systematic error may occur at any stage in the conception and design of a study or in the collection, analysis, interpretation, or reporting of data.

Within the GRADE Certainty of Evidence framework, the term "risk of bias" exclusively concerns limitations in the internal validity of the body of evidence for a single outcome. Related terms used by others for "risk of bias" include "internal validity problems", "study limitations", and "methodological limitations".

Risk of bias is one of the domains that can impact the rating of the certainty of evidence. The potential for systematic error from other sources, such as external validity (indirectness), publication bias, and inconsistency, are dealt with separately.

For computer applications (i.e., communication via machine-interpretable code), the term "risk of bias" is not encouraged. Instead, users should use one of the more specific codes, "Risk of bias across studies" or "Risk of bias within a study", to more precisely represent the context of the risk of bias judgements being coded.

 

 

Project Coordination Updates

 

The Project Management Working Group prepared a suggested agenda for the week: 

        • Monday, January 22:
          • 8 am Eastern: Project Management - FHIR Changes and EBMonFHIR Implementation Guide issues
          • 9 am Eastern: Setting the Scientific Record on FHIR Working Group - Review objectives and priorities, development of GRADEpro to FEvIR Converter
          • 10 am Eastern: CQL Development Working Group (HL7 CDS EBMonFHIR sub-WG) - Develop objectives and priorities
          • 2 pm Eastern: Statistic Terminology Working Group - SEVCO terms for measures of dispersion and calibration (8 terms open for vote)
          • 3 pm Eastern: HL7 Learning Health Systems (LHS) WG - follow up and report out from HL7 Connectathon

        • Tuesday, January 23:

          • 9 am Eastern: Measuring the Rate of Scientific Knowledge Transfer - SKAF Board of Directors Meeting and Board Elections, review changes for overall project support for Measuring the Rate of Scientific Knowledge Transfer Project
          • 2 pm Eastern: StatisticsOnFHIR Working Group (HL7 CDS EBMonFHIR sub-WG) – Review objectives and priorities, review progress on ResearchStudy example
          • 3 pm Eastern: Ontology Management Working Group – Review objectives and priorities
          • 4 pm Eastern: HL7 BRR Working Group Meeting

        • Wednesday, January 24:

            • 8 am Eastern: Funding the Knowledge Ecosystem Infrastructure Working Group – Develop value proposition for making guidelines computable
            • 9 am Eastern: Communications Working Group – Publications (Study Design Paper), Presentations (HL7 Working Group Meeting Education Session, Global Evidence Summit 2), Website

        • Thursday, January 25:

            • 8 am Eastern: EBM Implementation Guide Working Group (HL7 CDS EBMonFHIR sub-WG) – GIN Tech Working Group Meeting, Review IG Ballot Feedback, Prepare EBMonFHIR IG CodeSystem for HL7 Terminology
            • 9 am Eastern: Computable EBM Tools Development Working Group – Review objectives and priorities
            • 12 pm Eastern: HL7 BRR Work Group 

        • Friday, January 26:

              • 9 am Eastern: Risk of Bias Terminology Working Group – review SEVCO terms for qualitative research bias (6 terms open for vote)
              • 10 am Eastern: GRADE Ontology Working Group – term development (Risk of Bias, Inconsistency)
              • 12 pm Eastern: Project Management - Prepare weekly agenda for next week

       

      Releases Today on the FEvIR Platform: 

      FEvIR®: RAte of Dissemination Assessment Research (RADAR) Tool version 0.4.0 (January 19, 2024) facilitates research conduct and project management for an investigation to measure the rate of scientific knowledge transfer.

      Release 0.4.0 (January 19, 2024) now allows an investigation administrator to see the users of their investigation and to give out invite links for others to join.

       

      Quote for thought: “The trouble with having an open mind, of course, is that people will insist on coming along and trying to put things in it.” ― Terry Pratchett, Diggers

      Joanne Dehnbostel

      unread,
      Jan 23, 2024, 5:01:52 AM1/23/24
      to Health Evidence Knowledge Accelerator (HEvKA)

       

       

       

      7 people (BA, CE, HL, JD, KS, KW, MA) participated today in up to 4 active working group meetings.

      Today the Project Management Working Group and the Setting the Scientific Record on FHIR Working Group discussed FHIR change requests and EBMonFHIR Implementation Guide ballot comments. 

      The CQL Development Working Group (HL7 CDS EBMonFHIR sub-WG) discussed the future objectives and priorities for the group including possible collaboration with NIH entities.

      The Statistic Terminology Working Group reviewed the progress of voting on the 8 terms that have been out for vote and found that two of the terms had received enough votes to be approved (standard error of the mean, standard error of the proportion), leaving 6 terms which, although none have opposing votes, do not yet have enough votes to be approved (calibration intercept, calibration slope, mean calibration, standard error of the difference between independent means, standard error of the difference between independent proportions, measure of discrimination). There are currently 6 terms open for vote.

      Term

      Definition

      Alternative Terms

      Comment for application

      calibration intercept

      A measure of calibration that is the difference between the mean expected value and the mean observed value.

      calibration-in-the-large

      The calibration intercept is computed from a statistical model for calibration of binary variables (0 or 1) where the log odds of the predicted probabilities is a linear function of the empirical frequencies. The notion of calibration in the large is that the intercept is a gross assessment of whether the average prediction matches the average outcome, however, this interpretation is exquisitely sensitive to the choice of referent factors within the prediction model.

      calibration slope

      A measure of calibration computed from the rate of change in observed value per unit change in the predicted value.

      calibration-in-the small

      The calibration slope is computed from a statistical model for calibration of binary variables (0 or 1) where the log odds of the predicted probabilities is a linear function of the empirical frequencies. 1 represents a perfect calibration, 0 represents lack of correlation. Inspection of the graph allows identification of areas of under and over confidence.

      mean calibration

      A measure of calibration that is the average of a function of the difference between the expected values and the observed values.

      For predictive modeling of non-continuous variables, the mean calibration is a measure of calibration that is the average of a function of the difference between the expected probabilities and the observed frequencies. The expected values may be computed (as in predictive models) or may be derived from reference data (as typical for a measurement device). When the function is the square of the difference and the variables are binary (0 or 1), the measure is called the Brier score.

      standard error of the difference between independent means

      A measure of dispersion applied to differences between means of independent groups across hypothetical repeated random samples.

      <See standard error of the difference between independent means for the comment for application which includes the formula.>

      standard error of the difference between independent proportions

      A measure of dispersion applied to differences between proportions arising from independent groups across hypothetical repeated random samples.

      <See standard error of the difference between independent proportions for the comment for application which includes the formula.>

      measure of discrimination

      A statistic that quantifies the degree to which a classifier can distinguish among two or more groups.

      A classifier is a rule, formula, algorithm, or procedure used to label an instance based on its attributes.
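Two points in the table above lend themselves to a quick numeric sketch: the Brier score mentioned under mean calibration, and the standard-error-of-difference terms, shown here with the conventional textbook formulas (the SEVCO comments that contain the actual formulas are not reproduced in this message):

```python
from math import sqrt

# Brier score: mean squared difference between predicted probabilities and
# observed binary (0/1) outcomes, as described under "mean calibration" above.
def brier_score(predicted, observed):
    return sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(predicted)

# Conventional textbook formulas; the SEVCO "comment for application" formulas
# referenced in the table are not reproduced in this report.
def se_diff_means(s1, n1, s2, n2):
    """Standard error of the difference between two independent means."""
    return sqrt(s1**2 / n1 + s2**2 / n2)

def se_diff_proportions(p1, n1, p2, n2):
    """Standard error of the difference between two independent proportions."""
    return sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

print(round(brier_score([0.9, 0.2, 0.7, 0.1], [1, 0, 1, 0]), 4))  # 0.0375
print(round(se_diff_means(10, 25, 12, 36), 3))                    # 2.828
print(round(se_diff_proportions(0.5, 100, 0.5, 100), 3))          # 0.071
```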

      To participate you can join the Scientific Evidence Code System (SEVCO) Expert Working Group at https://fevir.net/resources/Project/27845.

      The Statistic Terminology Working Group also discussed entering additional data into FHIR for an actual research study: Telemedicine for Depression in Primary Care - Power analysis [Database Entry: FHIR Evidence Resource]. Contributors: Harold Lehmann [Authors/Creators]. In: Fast Evidence Interoperability Resources (FEvIR) Platform, FOI 191597. Revised 2024-01-09. Available at: https://fevir.net/resources/Evidence/191597.

      The HL7 Learning Health Systems (LHS) Working Group reviewed Connectathon achievements of the EBMonFHIR track (https://docs.google.com/presentation/d/1jBSlbEAr2sUUeaR6t6_PJrMdvHMTcazZ/edit#slide=id.p1), discussed CPG ballot comments, and created a Jira ticket (https://jira.hl7.org/browse/FHIR-44160) to suggest reconciling overlaps between the CPG and EBMonFHIR implementation guides.

       

      Quote for Thought: "Life is a long lesson in humility” – James M. Barrie

      Joanne Dehnbostel

      unread,
      Jan 23, 2024, 10:53:45 PM1/23/24
      to Health Evidence Knowledge Accelerator (HEvKA)

       

       

      11 people (BA, CE , HL, IK, JD, JT, KR, KS, KW, MH, RC) participated today in up to 3 active working group meetings.

      Today the SKAF Board of Directors met and held elections to renew the terms of existing board members. They also discussed the objectives of the organization and changed the wording of the bylaws regarding meeting times. 

      The StatisticsOnFHIR Working Group (HL7 CDS EBMonFHIR sub-WG) and Ontology Management Working Group continued progress on a ResearchStudy example (https://fevir.net/resources/project/195718) and discussed the future of the journal article representing our SEVCO Study Design Code System. 

      The HL7 BRR Working Group discussed HL7 Connectathon highlights from last week and HL7 Working Group Meeting plans for next week, and resolved to create a new backbone element for the ResearchSubject Resource. 

      Quote for thought: " We don't make mistakes; we have happy accidents." --Bob Ross

      Correction – Yesterday's daily update stated that the term mean calibration did not have enough votes to be approved as a term in the SEVCO code system when in fact it was approved and is no longer open for vote. The summary of the meeting should have said:

      The Statistic Terminology Working Group reviewed the progress of voting on the 8 terms that have been out for vote and found that three of the terms had received enough votes to be approved (standard error of the mean, standard error of the proportion, mean calibration), leaving 5 terms which, although none have opposing votes, do not yet have enough votes to be approved (calibration intercept, calibration slope, standard error of the difference between independent means, standard error of the difference between independent proportions, measure of discrimination). There are currently 5 terms open for vote.

      Term

      Definition

      Alternative Terms

      Comment for application

      calibration intercept

      A measure of calibration that is the difference between the mean expected value and the mean observed value.

      calibration-in-the-large

      The calibration intercept is computed from a statistical model for calibration of binary variables (0 or 1) where the log odds of the predicted probabilities is a linear function of the empirical frequencies. The notion of calibration in the large is that the intercept is a gross assessment of whether the average prediction matches the average outcome, however, this interpretation is exquisitely sensitive to the choice of referent factors within the prediction model.

      calibration slope

      A measure of calibration computed from the rate of change in observed value per unit change in the predicted value.

      calibration-in-the small

      The calibration slope is computed from a statistical model for calibration of binary variables (0 or 1) where the log odds of the predicted probabilities is a linear function of the empirical frequencies. 1 represents a perfect calibration, 0 represents lack of correlation. Inspection of the graph allows identification of areas of under and over confidence.

      standard error of the difference between independent means

      A measure of dispersion applied to differences between means of independent groups across hypothetical repeated random samples.

      <See standard error of the difference between independent means for the comment for application which includes the formula.>

      standard error of the difference between independent proportions

      A measure of dispersion applied to differences between proportions arising from independent groups across hypothetical repeated random samples.

      <See standard error of the difference between independent proportions for the comment for application which includes the formula.>

      measure of discrimination

      A statistic that quantifies the degree to which a classifier can distinguish among two or more groups.

      A classifier is a rule, formula, algorithm, or procedure used to label an instance based on its attributes.

      To participate you can join the Scientific Evidence Code System (SEVCO) Expert Working Group at https://fevir.net/resources/Project/27845.

      To get involved or stay informed: HEvKA Project Page on FEvIR Platform, HEvKA Project Page on HL7 Confluence, or join any of the groups that are now meeting in the following weekly schedule:

      Joanne Dehnbostel

      unread,
      Jan 25, 2024, 7:03:30 PM1/25/24
      to Health Evidence Knowledge Accelerator (HEvKA)

       

       

       

      10 people (BA, CE, CW, ER, IK, IK, JD, KR, KS, VK) participated today in up to 2 active working group meetings.

       

      Today's EBM Implementation Guide Working Group (HL7 CDS EBMonFHIR sub-WG) was the first GIN Tech Working Group Meeting since Dr. Brian S. Alper became Chair of this group. Members introduced themselves and got an introduction to the relationship between GINtech and EBMonFHIR which ultimately lead to the Health Evidence Knowledge Accelerator (HEvKA) project. 

      As its first effort, the group decided to propose a workshop for the Global Evidence Summit in Prague in September 2024. Proposals are due February 21, 2024.

      A sub-group will meet Wednesday, February 10, 2024, at 8 am Eastern time, in the HEvKA meeting room to create a plan for this workshop.

      A proposed title for the workshop is "What is Needed to Make Guidelines Computable?: A Consensus Development Exercise". The activity of the workshop would be to answer that question; the ACCORD concept could be used. This may also be the title of a resulting publication, which could be submitted to the new GIN Journal.

      The Computable EBM Tools Development Working Group worked to develop tooling for the GRADEpro to FHIR software on the FEvIR Platform.

The HL7 BRR Working Group continued working on a Confluence page to help with expressing eligibility criteria using the FHIR Group Resource.

      Quote for thought: "You don't have to see the whole staircase, just take the first step." --Dr. Martin Luther King Jr.

      Joanne Dehnbostel

      unread,
      Jan 26, 2024, 12:56:27 PM1/26/24
      to Health Evidence Knowledge Accelerator (HEvKA)

       

      7 people (BA, JD, JO, JT, KR, KS, MA) participated today in up to 2 active working group meetings.

      Today the Funding the Knowledge Ecosystem Infrastructure Working Group discussed potential funding sources.
The Communications Working Group discussed submitting our SEVCO Study Design Terminology paper to the International Journal of Epidemiology (https://academic.oup.com/ije/pages/General_Instructions) and planned the presentation that will be given as an HL7 Working Group Meeting Education Session next week. The topic of the session will be the EBMonFHIR Implementation Guide.

The HL7 CDS Working Group revised the schedule for the upcoming HL7 Working Group Meeting and discussed implications of the new ONC HTI-1 regulations on the CDS industry. More information can be found here: Federalregister.gov/documents/2024/01/09/2023-28857/health-data-technology-and-interoperability-certification-program-updates-algorithm-transparency-and

      Quote for Thought: "The chief function of the body is to carry the brain around." --Thomas A. Edison

      Joanne Dehnbostel

      unread,
      Jan 27, 2024, 8:46:27 AM1/27/24
      to Health Evidence Knowledge Accelerator (HEvKA)

       

       

       

       

       

      13 people (BA, CA-D, HK, HL, JB, JD, KS, KW, MA, PW, SL, TD, TL) participated today in up to 3 active working group meetings.

       

      The SEVCO Expert Working Group has approved 390 of 603 (64.68%) terms so far. There are currently 10 Risk of Bias terms, and 5 Statistic terms open for voting.  

       

Today the Risk of Bias Terminology Working Group found that the previous 6 terms open for vote did not receive enough votes for approval (incoherence among data, analysis, and interpretation; mixed methods research bias; bias in mixed methods research design; ineffective integration of qualitative and quantitative study components; inappropriate interpretation of integration of qualitative and quantitative findings; inadequate handling of inconsistency between qualitative and quantitative findings).

       

The group then defined an additional 4 terms (bias in study eligibility criteria; study eligibility criteria not prespecified; study eligibility criteria not appropriate for review question; study eligibility criteria ambiguous) so there are currently ten risk of bias terms open for vote.

       

      An additional term (study eligibility criteria limits for study characteristics not appropriate) was discussed but not finished and will be discussed again next week before being released for vote.

       

      Term

      Definition

      Alternative Terms
      (if any)

      Comment for application
      (if any)

      bias in study eligibility criteria

      A study selection bias specific to the inclusion and exclusion criteria.

      If the study eligibility criteria (inclusion and exclusion criteria for study selection) result in a dataset that is non-representative of the population of interest, then the criteria introduce systematic error.

      A study selection bias is a selection bias resulting from factors that influence study selection, from methods used to include or exclude studies for evidence synthesis, or from differences between the study sample and the population of interest.

      study eligibility criteria not prespecified

      A bias in study eligibility criteria in which the criteria are not stated before the study selection process occurs.

      Failure to specify the study eligibility criteria before evaluating studies for selection can lead to a situation in which the data discovered during the study selection process influences the criteria for selection in a way that introduces systematic error.

      study eligibility criteria not appropriate for review question

      A bias in study eligibility criteria in which there is a mismatch between the selected studies and the research question.

      study eligibility criteria ambiguous

      A bias in study eligibility criteria due to unclear specification.

      Eligibility criteria that are not sufficiently described to enable reproduction of study selection can introduce systematic error.

      incoherence among data, analysis, and interpretation

      Any mismatch among hypothesis, data collected, data analysis, and results interpretation in the study report.

      The term mismatch applies to an inappropriate or wrong or inadequate relationship.

mixed methods research bias

A bias specific to the alignment of design, conduct, analysis or reporting of qualitative research and quantitative research within the same research project.

      Mixed methods research is a research approach that combines both qualitative and quantitative research methods within a single study or research project. This methodology aims to provide a more comprehensive understanding of a research problem by integrating the strengths of both qualitative and quantitative research. Examples of mixed methods research include combining surveys with in-depth interviews, using quantitative data to identify patterns and trends followed by qualitative data to explore the underlying reasons and meanings, or incorporating qualitative findings to help interpret and validate quantitative results. Overall, mixed methods research provides a more holistic understanding of a research question by acknowledging and leveraging the strengths of both qualitative and quantitative approaches.

bias in mixed methods research design

A mixed methods research bias in which the mixed methods approach used in a study is not appropriate for the research question and problem.

      inadequate rationale for mixed methods design

      This signaling question in the Mixed Methods Assessment Tool (MMAT) is 5.1. Is there an adequate rationale for using a mixed methods design to address the research question?

      Common mixed methods designs include:

Convergent design: The QUAL and QUAN components are usually (but not necessarily) concomitant. The purpose is to examine the same phenomenon by interpreting QUAL and QUAN results (bringing data analysis together at the interpretation stage), or by integrating QUAL and QUAN datasets (e.g., data on same cases), or by transforming data (e.g., quantization of qualitative data).

Sequential explanatory design: Results of the phase 1 QUAN component inform the phase 2 QUAL component. The purpose is to explain QUAN results using QUAL findings. E.g., the QUAN results guide the selection of QUAL data sources and data collection, and the QUAL findings contribute to the interpretation of QUAN results.

Sequential exploratory design: Results of the phase 1 QUAL component inform the phase 2 QUAN component. The purpose is to explore, develop and test an instrument (or taxonomy), or a conceptual framework (or theoretical model). E.g., the QUAL findings inform the QUAN data collection, and the QUAN results allow a statistical generalization of the QUAL findings.

      Key references: Creswell et al. (2011); Creswell and Plano Clark, (2017); O'Cathain (2010)

      Reference - Hong QN, Pluye P, Fàbregues S, Bartlett G, Boardman F, Cargo M, Dagenais P, Gagnon M-P, Griffiths F, Nicolau B, O’Cathain A, Rousseau M-C, Vedel I. Mixed Methods Appraisal Tool (MMAT), version 2018. Registration of Copyright (#1148552), Canadian Intellectual Property Office, Industry Canada. Accessed at: http://mixedmethodsappraisaltoolpublic.pbworks.com/w/file/fetch/127916259/MMAT_2018_criteria-manual_2018-08-01_ENG.pdf

ineffective integration of qualitative and quantitative study components

A mixed methods research bias in which the qualitative research and quantitative research components are not adequately combined.

inappropriate interpretation of integration of qualitative and quantitative findings

A mixed methods research bias in which the process of combining the results of the constituent analyses is flawed.

      This criterion is related to meta-inference, which is defined as the overall interpretations derived from integrating qualitative and quantitative findings (Teddlie and Tashakkori, 2009). Meta-inference occurs during the interpretation of the findings from the integration of the qualitative and quantitative components, and shows the added value of conducting a mixed methods study rather than having two separate studies. (Pluye et al 2018)

inadequate handling of inconsistency between qualitative and quantitative findings

A mixed methods research bias in which discrepancies in the results from the qualitative and quantitative components are not adequately addressed.

      When integrating the findings from the qualitative and quantitative components, divergences and inconsistencies (also called conflicts, contradictions, discordances, discrepancies, and dissonances) can be found. It is not sufficient to only report the divergences; they need to be explained. Different strategies to address the divergences have been suggested such as reconciliation, initiation, bracketing and exclusion (Pluye et al., 2009b). (Reference - Hong QN, Pluye P, Fàbregues S, Bartlett G, Boardman F, Cargo M, Dagenais P, Gagnon M-P, Griffiths F, Nicolau B, O’Cathain A, Rousseau M-C, Vedel I. Mixed Methods Appraisal Tool (MMAT), version 2018. Registration of Copyright (#1148552), Canadian Intellectual Property Office, Industry Canada. Accessed at: http://mixedmethodsappraisaltoolpublic.pbworks.com/w/file/fetch/127916259/MMAT_2018_criteria-manual_2018-08-01_ENG.pdf)

      To participate you can join the Scientific Evidence Code System (SEVCO) Expert Working Group at https://fevir.net/resources/Project/27845.

       

       

       

The GRADE Ontology Working Group found that the term (Risk of Bias) had enough votes to be approved and added to the code system. The group then worked on definitions and comments for application for two "child terms" of risk of bias (Risk of bias across studies; Risk of bias within a study), and these terms are open for voting. The group then defined a third term (Inconsistency). There are currently 3 terms open for voting.

       

      The meeting was recorded, and the recording can be viewed here.

       

      The HEvKA GRADE Ontology Working Group currently has 51 members.

       

       Term

      Definition

      Alternative Terms

      (if any)

      Comment for Application (if any)

      Risk of bias across studies

      The potential for systematic error in results or findings across studies due to limitations in their design, conduct, or reporting.

      In the definition of risk of bias, the "potential for" covers the likelihood of and the magnitude of systematic error. A "systematic error" is a difference between the reported results of a study or studies (findings, conclusions, or effect estimates) and the actuality (the truth, the estimand, or the true value targeted for estimation). The systematic error may occur at any stage in the conception and design of a study or in the collection, analysis, interpretation, or reporting of data.

Within the GRADE Certainty of Evidence framework, the term "risk of bias" exclusively concerns limitations in the internal validity of the body of evidence for a single outcome. Related terms used by others for "risk of bias" include "internal validity problems", "study limitations", and "methodological limitations".

      Risk of bias is one of the domains that can impact the rating of the certainty of evidence. The potential for systematic error from other sources, such as external validity (indirectness), publication bias, and inconsistency, are dealt with separately.

For computer applications (i.e., communication via machine-interpretable code), the term "risk of bias" is not encouraged. Instead, users should use one of the more specific codes, "Risk of bias across studies" or "Risk of bias within a study", to more precisely represent the context of the risk of bias judgements being coded.

Risk of bias within a study

The potential for systematic error in the results or findings from a single study due to limitations in its design, conduct, or reporting.

      In the definition of risk of bias, the "potential for" covers the likelihood of and the magnitude of systematic error. A "systematic error" is a difference between the reported results of a study or studies (findings, conclusions, or effect estimates) and the actuality (the truth, the estimand, or the true value targeted for estimation). The systematic error may occur at any stage in the conception and design of a study or in the collection, analysis, interpretation, or reporting of data.

Related terms used by others for "risk of bias" include "internal validity problems", "study limitations", and "methodological limitations".

For computer applications (i.e., communication via machine-interpretable code), the term "risk of bias" is not encouraged. Instead, users should use one of the more specific codes, "Risk of bias across studies" or "Risk of bias within a study", to more precisely represent the context of the risk of bias judgements being coded.

      Inconsistency

      Variations in the findings that were considered to estimate the effect.

Variations may include differences across estimates from different studies or may include differences across estimates from different analyses (such as sensitivity analyses) within studies. Variations may include differences in the magnitude or direction of the findings. Criteria for evaluating inconsistency include (but are not limited to) similarity of point estimates, extent of overlap of confidence intervals, and statistical criteria including tests of heterogeneity, I² or Tau². Inconsistency is one of the domains that can impact the rating of the certainty of evidence. Related terms used by others for "inconsistency" include "heterogeneity", "statistical heterogeneity", and "clinical heterogeneity". Although any variations may be considered inconsistency, rating down the certainty of evidence due to inconsistency implies large, unexplained, or unjustified inconsistency.
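As a hedged sketch of one of the statistical criteria named above (not the GRADE group's own tooling), Higgins' I² can be derived from Cochran's Q over a set of study estimates; the function name and inputs are illustrative:

```python
def i_squared(estimates, variances):
    """Higgins' I^2 from a fixed-effect Cochran's Q.
    estimates: per-study effect estimates
    variances: their squared standard errors"""
    weights = [1.0 / v for v in variances]          # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, estimates))
    df = len(estimates) - 1
    # I^2 = max(0, (Q - df) / Q), expressed as a percentage
    return max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
```

Identical estimates yield I² = 0; larger unexplained variation among estimates pushes I² toward 100, which is the sense in which "large, unexplained" inconsistency can justify rating down.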

       

       

       

The Project Management Working Group prepared a suggested agenda for the week of January 29. Several meetings have been cancelled to allow for participation in the HL7 Working Group Meeting.

Monday, January 29:

          • 8 am Eastern: Project Management - Cancelled for HL7 WGM (BRR Q3)
          • 9 am Eastern: Setting the Scientific Record on FHIR Working Group - Cancelled for HL7 WGM (BRR Q3)
          • 10 am Eastern: CQL Development Working Group (HL7 CDS EBMonFHIR sub-WG) - Cancelled for HL7 WGM
    • 2 pm Eastern: Statistic Terminology Working Group - SEVCO terms for measures of dispersion and calibration (5 terms open for vote)
          • 3 pm Eastern: HL7 Learning Health Systems (LHS) WG - Not meeting this week due to HL7 WGM

      Tuesday, January 30:

    • 9 am Eastern: Measuring the Rate of Scientific Knowledge Transfer - Cancelled for HL7 WGM (CDS Q3)
          • 2 pm Eastern: StatisticsOnFHIR Working Group (HL7 CDS EBMonFHIR sub-WG) – Review objectives and priorities, review progress on ResearchStudy example
          • 3 pm Eastern: Ontology Management Working Group – Review objectives and priorities
          • 4 pm Eastern: HL7 BRR Working Group Meeting - Not meeting this week due to HL7 WGM

      Wednesday, January 31

    • 8 am Eastern: Funding the Knowledge Ecosystem Infrastructure Working Group – Prepare proposals for Global Evidence Summit 2
    • 9 am Eastern: Communications Working Group – Publications (Study Design Paper), Presentations (Global Evidence Summit 2), Website

      Thursday, February 1:

          • 8 am Eastern: EBM Implementation Guide Working Group (HL7 CDS EBMonFHIR sub-WG) – Cancelled for HL7 WGM
          • 9 am Eastern: Computable EBM Tools Development Working Group – Cancelled for HL7 WGM
    • 12 pm Eastern: HL7 BRR Work Group - not meeting this week

Friday, February 2:

          • 9 am Eastern: Risk of Bias Terminology Working Group – review SEVCO terms  (10 terms open for vote)
          • 10 am Eastern: GRADE Ontology Working Group – term development (Risk of Bias, Inconsistency)
          • 12 pm Eastern: Project Management - Prepare weekly agenda for next week

         

For those going to the HL7 Working Group Meeting, the following sessions are of special interest for the EBMonFHIR project:

    • Monday, January 29:
    • 8:45-10AM Eastern Biomedical Research and Regulation (BRR) Q3 general session
    • Tuesday
    • 8:45-10AM Eastern Clinical Decision Support (CDS) Q3 EBMonFHIR session
    • 10:00-10:30AM Eastern EBMonFHIR Implementation Guide introductory Information Session
    • Wednesday
    • 6:00-7:30AM Eastern Biomedical Research and Regulation (BRR) Q2 Clinical research
    • Thursday
    • 8:45-10AM Eastern Learning Health Systems (LHS) Q3 EBMonFHIR update

         

        Quote for Thought: “Outside of a dog, a book is man’s best friend. Inside of a dog, it’s too dark to read.”—Groucho Marx

        Joanne Dehnbostel

        unread,
        Jan 30, 2024, 5:34:39 AM1/30/24
        to Health Evidence Knowledge Accelerator (HEvKA)

         

         

         

         

         

         

         

         

         

         

        5 people (BA, HL, JD, KS, KW) participated today in 1 active working group meeting.

Today the Project Management, Setting the Scientific Record on FHIR Working Group, and CQL Development Working Group (HL7 CDS EBMonFHIR sub-WG) meetings were cancelled to allow participation in the January 2024 HL7 Working Group Meeting.

        The Statistic Terminology Working Group found that all 5 of the terms open for vote last week received only 3 votes, so none of the terms have enough votes to be approved for the code system. The SEVCO Expert Working Group has approved 390 of 603 (64.68%) terms so far. There are currently 10 Risk of Bias terms, and 5 Statistic terms open for voting.  

        However, the term mean calibration, open for voting the previous week, did receive enough votes for approval and was incorrectly reported last week as still open for vote. I apologize for any confusion this may have caused. 

        The 5 statistic terms currently open for voting are listed below:

         

        Term

        Definition

        Alternative Terms

        Comment for application

        calibration intercept

        A measure of calibration that is the difference between the mean expected value and the mean observed value.

        calibration-in-the-large

The calibration intercept is computed from a statistical model for calibration of binary variables (0 or 1) where the log odds of the predicted probabilities is a linear function of the empirical frequencies. The notion of calibration-in-the-large is that the intercept is a gross assessment of whether the average prediction matches the average outcome; however, this interpretation is exquisitely sensitive to the choice of referent factors within the prediction model.

        calibration slope

        A measure of calibration computed from the rate of change in observed value per unit change in the predicted value.

calibration-in-the-small

The calibration slope is computed from a statistical model for calibration of binary variables (0 or 1) where the log odds of the predicted probabilities is a linear function of the empirical frequencies. A slope of 1 represents perfect calibration; 0 represents a lack of correlation. Inspection of the calibration graph allows identification of areas of under- and over-confidence.

        standard error of the difference between independent means

        A measure of dispersion applied to differences between means of independent groups across hypothetical repeated random samples.

        <See standard error of the difference between independent means for the comment for application which includes the formula.>

        standard error of the difference between independent proportions

        A measure of dispersion applied to differences between proportions arising from independent groups across hypothetical repeated random samples.

        <See standard error of the difference between independent proportions for the comment for application which includes the formula.>

        measure of discrimination

A statistic that quantifies the degree to which a classifier can distinguish among two or more groups.

        A classifier is a rule, formula, algorithm, or procedure used to label an instance based on its attributes.
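The two standard-error terms above defer their formulas to the term pages. As a hedged sketch, the standard textbook formulas for independent samples (which may differ in detail from the SEVCO comments for application) can be written directly; the function names are illustrative:

```python
import math

def se_diff_means(sd1, n1, sd2, n2):
    """SE of (mean1 - mean2) for independent samples:
    sqrt(s1^2/n1 + s2^2/n2), where s is the sample standard deviation."""
    return math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)

def se_diff_proportions(p1, n1, p2, n2):
    """SE of (p1 - p2) for independent samples:
    sqrt(p1(1-p1)/n1 + p2(1-p2)/n2)."""
    return math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
```

Both follow from the variance of a difference of independent quantities being the sum of their variances.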

The Statistic Terminology Working Group then continued to work on the application of study eligibility criteria in a real research study example using the FHIR Group Resource: https://fevir.net/resources/Group/183488.
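As a minimal, hypothetical sketch of how eligibility criteria might be expressed with the FHIR Group Resource's characteristic element (this is not the working group's actual example; the codes and values below are illustrative placeholders, not bound terminology):

```python
# Hypothetical FHIR Group resource with eligibility criteria expressed as
# characteristic elements; exclude=True marks an exclusion criterion.
eligibility_group = {
    "resourceType": "Group",
    "type": "person",
    "membership": "definitional",  # criteria-defined, not an enumerated roster
    "characteristic": [
        {   # inclusion criterion: adults (illustrative code and range)
            "code": {"text": "age"},
            "valueRange": {"low": {"value": 18, "unit": "years"}},
            "exclude": False,
        },
        {   # exclusion criterion: pregnancy (illustrative)
            "code": {"text": "pregnancy"},
            "valueBoolean": True,
            "exclude": True,
        },
    ],
}
```

The design point is that inclusion and exclusion are both carried by characteristic entries, distinguished only by the exclude flag, rather than by two separate lists.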

        January 2024 HL7 Working Group Meeting

        Two HL7 Working Group Meeting attendees joined us in a virtual "meetup" for an Introduction to the FEvIR Platform. 

        Biomedical Research and Regulation Working Group Monday Q3-The group worked through Jira tickets regarding the ResearchSubject Resource. 

        Quote for Thought: "Happiness is not achieved by the conscious pursuit of happiness; it is generally the by-product of other activities."–Aldous Huxley

        Joanne Dehnbostel

        unread,
        Jan 30, 2024, 12:58:03 PM1/30/24
        to Health Evidence Knowledge Accelerator (HEvKA), Health Evidence Knowledge Accelerator (HEvKA) Weekly Update

         

         

         

         

         

         

         

         

         

24 people (BA, CA-D, CE, CW, ER, HK, HL, IK, IK, JB, JD, JO, JT, KR, KS, KW, MA, MH, PW, RC, SL, TD, TL, VK) from 10 countries (Canada, Chile/Spain, China, Finland, Germany, Italy, Norway, Peru, UK, USA) participated this week in up to 14 active working group meetings.

Some of the HEvKA Meetings are cancelled next week to allow participation in the January 2024 HL7 Working Group Meeting - see below for details:

        Project Coordination Updates:

        • On January 26, The Project Management Working Group prepared a suggested agenda for the week of January 29 - February 2: 
          • Monday, January 29 :  
            • 8 am Eastern: Project Management - Cancelled for HL7 WGM (BRR Q3)
            • 9 am Eastern: Setting the Scientific Record on FHIR Working Group - Cancelled for HL7 WGM (BRR Q3)
            • 10 am Eastern: CQL Development Working Group (HL7 CDS EBMonFHIR sub-WG) - Cancelled for HL7 WGM
    • 2 pm Eastern: Statistic Terminology Working Group - SEVCO terms for measures of dispersion and calibration (5 terms open for vote)
            • 3 pm Eastern: HL7 Learning Health Systems (LHS) WG - Not meeting this week due to HL7 WGM

        Tuesday, January 30:

    • 9 am Eastern: Measuring the Rate of Scientific Knowledge Transfer - Cancelled for HL7 WGM (CDS Q3)
            • 2 pm Eastern: StatisticsOnFHIR Working Group (HL7 CDS EBMonFHIR sub-WG) – Review objectives and priorities, review progress on ResearchStudy example
            • 3 pm Eastern: Ontology Management Working Group – Review objectives and priorities
            • 4 pm Eastern: HL7 BRR Working Group Meeting - Not meeting this week due to HL7 WGM

        Wednesday, January 31

    • 8 am Eastern: Funding the Knowledge Ecosystem Infrastructure Working Group – Prepare proposals for Global Evidence Summit 2
    • 9 am Eastern: Communications Working Group – Publications (Study Design Paper), Presentations (Global Evidence Summit 2), Website

        Thursday, February 1:

            • 8 am Eastern: EBM Implementation Guide Working Group (HL7 CDS EBMonFHIR sub-WG) – Cancelled for HL7 WGM
            • 9 am Eastern: Computable EBM Tools Development Working Group – Cancelled for HL7 WGM
    • 12 pm Eastern: HL7 BRR Work Group - not meeting this week

        Friday, February 2:

            • 9 am Eastern: Risk of Bias Terminology Working Group – review SEVCO terms  (10 terms open for vote)
            • 10 am Eastern: GRADE Ontology Working Group – term development (Risk of Bias, Inconsistency)
            • 12 pm Eastern: Project Management - Prepare weekly agenda for next week

        For those going to the HL7 Working Group Meeting, the following sessions are of special interest for the EBMonFHIR project:

              • Monday, January 29 :  
                • 8:45-10AM Eastern Biomedical Research and Regulation (BRR) Q3 general session
              • Tuesday
                • 8:45-10AM Eastern Clinical Decision Support (CDS) Q3 EBMonFHIR session
                • 10:00-10:30AM Eastern EBMonFHIR Implementation Guide introductory  Information Session
              • Wednesday
                • 6:00-7:30AM Eastern  Biomedical Research and Regulation (BRR) Q2 Clinical research
              • Thursday
                • 8:45-10AM Eastern Learning Health Systems (LHS) Q3 EBMonFHIR update

• Project Management Updates:

            • On January 22, the Project Management Working Group discussed FHIR change requests and EBMonFHIR Implementation Guide ballot comments. 
            • On January 26, The Project Management Working Group created an agenda for the following week.

• Communications Working Group Updates:

    • On January 24, the Communications Working Group discussed submitting our SEVCO Study Design Terminology paper to the International Journal of Epidemiology (https://academic.oup.com/ije/pages/General_Instructions) and planned the presentation that will be given as an HL7 Working Group Meeting Education Session next week. The topic of the session will be the EBMonFHIR Implementation Guide.

• Scientific Knowledge Accelerator Foundation Updates:

    • On Tuesday, January 23, the SKAF Board of Directors met and held elections to renew the terms of existing board members; they also discussed the objectives of the organization and changed the wording of the bylaws regarding meeting times.

• Scientific Evidence Code System (SEVCO) Updates:

On January 22, the Statistic Terminology Working Group reviewed the progress of voting on the 8 terms that have been out for vote and found that three of the terms had received enough votes to be approved (standard error of the mean, standard error of the proportion, mean calibration), leaving 5 terms which, although none have opposing votes, do not have enough votes to be approved (calibration intercept, calibration slope, standard error of the difference between independent means, standard error of the difference between independent proportions, measure of discrimination). There are currently 5 terms open for vote.

          Term

          Definition

          Alternative Terms

          Comment for application

          calibration intercept

          A measure of calibration that is the difference between the mean expected value and the mean observed value.

          calibration-in-the-large

The calibration intercept is computed from a statistical model for calibration of binary variables (0 or 1) where the log odds of the predicted probabilities is a linear function of the empirical frequencies. The notion of calibration-in-the-large is that the intercept is a gross assessment of whether the average prediction matches the average outcome; however, this interpretation is exquisitely sensitive to the choice of referent factors within the prediction model.

          calibration slope

          A measure of calibration computed from the rate of change in observed value per unit change in the predicted value.

calibration-in-the-small

The calibration slope is computed from a statistical model for calibration of binary variables (0 or 1) where the log odds of the predicted probabilities is a linear function of the empirical frequencies. A slope of 1 represents perfect calibration; 0 represents a lack of correlation. Inspection of the calibration graph allows identification of areas of under- and over-confidence.

          standard error of the difference between independent means

          A measure of dispersion applied to differences between means of independent groups across hypothetical repeated random samples.

          <See standard error of the difference between independent means for the comment for application which includes the formula.>

          standard error of the difference between independent proportions

          A measure of dispersion applied to differences between proportions arising from independent groups across hypothetical repeated random samples.

          <See standard error of the difference between independent proportions for the comment for application which includes the formula.>

          measure of discrimination

A statistic that quantifies the degree to which a classifier can distinguish among two or more groups.

          A classifier is a rule, formula, algorithm, or procedure used to label an instance based on its attributes.
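As a hedged sketch of how the calibration intercept and slope defined above are typically estimated (a plain Newton-Raphson logistic recalibration of outcomes on the logit of the predicted probabilities; this is an illustrative implementation, not any group's reference tooling):

```python
import math

def logistic_calibration(y, p, iters=50):
    """Fit y ~ logistic(a + b * logit(p)) by Newton-Raphson.
    Returns (a, b): a is the calibration intercept and b the
    calibration slope.  Requires 0 < p_i < 1.  Sketch only."""
    x = [math.log(pi / (1 - pi)) for pi in p]  # logit of predicted probabilities
    a, b = 0.0, 1.0
    for _ in range(iters):
        mu = [1 / (1 + math.exp(-(a + b * xi))) for xi in x]
        w = [m * (1 - m) for m in mu]
        # gradient of the log-likelihood
        ga = sum(yi - mi for yi, mi in zip(y, mu))
        gb = sum((yi - mi) * xi for yi, mi, xi in zip(y, mu, x))
        # observed information matrix (2x2)
        haa = sum(w)
        hab = sum(wi * xi for wi, xi in zip(w, x))
        hbb = sum(wi * xi * xi for wi, xi in zip(w, x))
        det = haa * hbb - hab * hab
        if det == 0:
            break
        a += (hbb * ga - hab * gb) / det
        b += (haa * gb - hab * ga) / det
    return a, b
```

A slope near 1 with intercept near 0 indicates good calibration; when predictions carry no information about the outcomes, the fitted slope is driven toward 0, matching the "lack of correlation" reading above.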

          To participate you can join the Scientific Evidence Code System (SEVCO) Expert Working Group at https://fevir.net/resources/Project/27845.

The group also discussed entering additional data into FHIR for an actual research study: Telemedicine for Depression in Primary Care - Power analysis [Database Entry: FHIR Evidence Resource]. Contributors: Harold Lehmann [Authors/Creators]. In: Fast Evidence Interoperability Resources (FEvIR) Platform, FOI 191597. Revised 2024-01-09. Available at: https://fevir.net/resources/Evidence/191597.

            • Risk of Bias Terminology Working Group Updates:

          The SEVCO Expert Working Group has approved 390 of 603 (64.68%) terms so far. There are currently 10 Risk of Bias terms, and 5 Statistic terms open for voting.  

           

On January 26, the Risk of Bias Terminology Working Group found that the previous 6 terms open for vote did not receive enough votes for approval (incoherence among data, analysis, and interpretation; mixed methods research bias; bias in mixed methods research design; ineffective integration of qualitative and quantitative study components; inappropriate interpretation of integration of qualitative and quantitative findings; inadequate handling of inconsistency between qualitative and quantitative findings).

           

• HL7 Standards Development Updates

    • HL7 Connectathon/WGM Updates: The January 2024 HL7 Working Group Meeting (WGM) will be held January 29 - February 2, 2024.
            • FHIR Specification Updates:
      • On January 22, the HL7 Learning Health Systems (LHS) working group reviewed Connectathon achievements of the EBMonFHIR track, discussed CPG ballot comments, and created a Jira ticket to suggest reconciling overlaps between the CPG and EBMonFHIR implementation guides.
      • On January 23, the HL7® Biomedical Research and Regulation (BRR) Work Group discussed HL7 Connectathon highlights from last week, HL7 Working Group Meeting plans for next week, and resolved to create a new backbone element for the ResearchSubject Resource.
              • On January 24, the HL7 Clinical Decision Support Working Group revised the schedule for the upcoming HL7 Working Group Meeting and discussed implications of the ONC HTI-1 regulations on the CDS industry. More information can be found here Federalregister.gov/documents/2024/01/09/2023-28857/health-data-technology-and-interoperability-certification-program-updates-algorithm-transparency-and
              • On January 25, the HL7 Biomedical Research and Regulation (BRR) Work Group continued working on a Confluence page to help with expressing eligibility criteria using the FHIR Group Resource; the link to this page is below:

           Expressing Eligibility Criteria using the Group (R6) Resource - Reference Page
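As a rough illustration of the approach discussed on that page (not taken from the page itself or from any FEvIR entry; the element names follow recent FHIR Group resource drafts, and all codes and values here are hypothetical), a single inclusion criterion might be represented as:

```python
# Minimal, hypothetical sketch of one eligibility criterion expressed with the
# FHIR Group resource's characteristic element. Element names follow recent
# FHIR Group drafts; the "age" code and the value are illustrative only.
inclusion_criterion = {
    "resourceType": "Group",
    "type": "person",
    "membership": "definitional",   # a criteria-defined (not enumerated) group
    "characteristic": [
        {
            "code": {"text": "age"},                        # attribute constrained
            "valueRange": {"low": {"value": 18, "unit": "years"}},
            "exclude": False                                # inclusion criterion
        }
    ]
}
```

Exclusion criteria would use the same structure with `exclude` set to true, so one Group resource can carry the full eligibility definition.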

            • EBM Implementation Guide Working Group Updates:

              • On January 25, the EBM Implementation Guide Working Group (HL7 CDS EBMonFHIR sub-WG) meeting was the first GIN Tech Working Group meeting since Dr. Brian S. Alper became Chair of that group. Members introduced themselves and received an introduction to the relationship between GINtech and EBMonFHIR, which ultimately led to the Health Evidence Knowledge Accelerator (HEvKA) project. 

          As its first effort, the group decided to propose a workshop for the Global Evidence Summit in Prague in September 2024. Proposals are due February 21, 2024.

          A subgroup will meet Wednesday February 10, 2024, at 8 am Eastern time, in the HEvKA meeting room to create a plan for this workshop.

          A proposed title for the workshop is "What is Needed to Make Guidelines Computable?: A Consensus Development Exercise". The activity of the workshop would be to answer that question; the ACCORD concept could be used. This may also be the title of a resulting publication, which could be submitted to the new GIN Journal.

            • StatisticsOnFHIR Working Group Updates:

              • On January 23, the StatisticsOnFHIR Working Group (HL7 CDS EBMonFHIR sub-WG) continued progress on a ResearchStudy example (https://fevir.net/resources/project/195718) and discussed the future of the journal article representing our SEVCO Study Design Code System. 

          ·       FEvIR Platform (and Tools) Development Updates

            • Computable EBM Tools Development Working Group Updates:
              • On January 25, the Computable EBM Tools Development Working Group worked to develop tooling for the GRADEpro to FHIR software on the FEvIR Platform
            • CQL Development Working Group Updates:
              • On January 22, the CQL Development Working Group (HL7 CDS EBMonFHIR sub-WG) discussed the future objectives and priorities for the group including possible collaboration with NIH entities. 
            • Setting the Scientific Record on FHIR Working Group Updates:
              • On January 22, the Setting the Scientific Record on FHIR Working Group discussed FHIR change requests and EBMonFHIR Implementation Guide ballot comments. 

          ·       Knowledge Ecosystem Liaison/Coordination Updates

            • Funding the Ecosystem Infrastructure Working Group Updates:
              • On January 24, the Funding the Knowledge Ecosystem Infrastructure Working Group discussed potential funding sources.
            • Ontology Management Working Group Updates:
              • On January 23, the Ontology Management Working Group continued progress on a ResearchStudy example (https://fevir.net/resources/project/195718) and discussed the future of the journal article representing our SEVCO Study Design Code System. 
            • GRADE Ontology Working Group Updates:
              • On January 26, the GRADE Ontology Working Group found that the term (Risk of Bias) had enough votes to be approved and added to the code system. The group then worked on definitions and comments for application for two “child terms” of risk of bias (Risk of bias across studies, Risk of bias within a study), and these terms are open for voting. The group then defined a third term (Inconsistency). There are currently 3 terms open for voting.

          The meeting was recorded, and the recording can be viewed here. There are currently 51 members of the HEvKA GRADE Ontology Working Group.

           

           Term

          Definition

          Alternative Terms

          (if any)

          Comment for Application (if any)

          Risk of bias across studies

          The potential for systematic error in results or findings across studies due to limitations in their design, conduct, or reporting.

          In the definition of risk of bias, the "potential for" covers the likelihood of and the magnitude of systematic error. A "systematic error" is a difference between the reported results of a study or studies (findings, conclusions, or effect estimates) and the actuality (the truth, the estimand, or the true value targeted for estimation). The systematic error may occur at any stage in the conception and design of a study or in the collection, analysis, interpretation, or reporting of data.

          Within the GRADE Certainty of Evidence framework, the term "risk of bias" exclusively concerns limitations in the internal validity of the body of evidence for a single outcome. Related terms used by others for "risk of bias" include "internal validity problems", "study limitations", and "methodological limitations".

          Risk of bias is one of the domains that can impact the rating of the certainty of evidence. The potential for systematic error from other sources, such as external validity (indirectness), publication bias, and inconsistency, are dealt with separately.

          For computer applications (i.e., communication via machine-interpretable code), the term "risk of bias" is not encouraged. Instead, users should use one of the more specific codes, "Risk of bias across studies" or "Risk of bias within a study", to more precisely represent the context of the risk of bias judgements being coded.

          Risk of bias within a study

          The potential for systematic error in the results or findings from a single study due to limitations in its design, conduct, or reporting.

          In the definition of risk of bias, the "potential for" covers the likelihood of and the magnitude of systematic error. A "systematic error" is a difference between the reported results of a study or studies (findings, conclusions, or effect estimates) and the actuality (the truth, the estimand, or the true value targeted for estimation). The systematic error may occur at any stage in the conception and design of a study or in the collection, analysis, interpretation, or reporting of data.

          Related terms used by others for "risk of bias" include "internal validity problems", "study limitations", and "methodological limitations".

          For computer applications (i.e., communication via machine-interpretable code), the term "risk of bias" is not encouraged. Instead, users should use one of the more specific codes, "Risk of bias across studies" or "Risk of bias within a study", to more precisely represent the context of the risk of bias judgements being coded.

          Inconsistency

          Variations in the findings that were considered to estimate the effect.

          Variations may include differences across estimates from different studies or may include differences across estimates from different analyses (such as sensitivity analyses) within studies. Variations may include differences in the magnitude or direction of the findings. Criteria for evaluating inconsistency include (but are not limited to) similarity of point estimates, extent of overlap of confidence intervals, and statistical criteria including tests of heterogeneity, I2 or Tau2. Inconsistency is one of the domains that can impact the rating of the certainty of evidence. Related terms used by others for "inconsistency" include "heterogeneity", "statistical heterogeneity", and "clinical heterogeneity". Although any variations may be considered inconsistency, rating down the certainty of evidence due to inconsistency implies large, unexplained, or unjustified inconsistency.
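The statistical criteria named above (a test of heterogeneity, I2, Tau2) can be illustrated with a short sketch. The study effect estimates and standard errors below are hypothetical, and tau-squared uses the DerSimonian-Laird estimator; this is a minimal illustration, not a substitute for a meta-analysis package.

```python
import math

# Hypothetical study results: effect estimates (e.g., log odds ratios)
# and their standard errors. Values are illustrative only.
estimates = [0.30, 0.45, 0.10, 0.55]
std_errors = [0.12, 0.15, 0.20, 0.18]

def heterogeneity(est, se):
    """Cochran's Q, I^2 (%), and DerSimonian-Laird tau^2 for a set of studies."""
    w = [1 / s**2 for s in se]                      # inverse-variance weights
    pooled = sum(wi * yi for wi, yi in zip(w, est)) / sum(w)
    q = sum(wi * (yi - pooled)**2 for wi, yi in zip(w, est))
    df = len(est) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)    # DL scaling constant
    tau2 = max(0.0, (q - df) / c)
    return q, i2, tau2

q, i2, tau2 = heterogeneity(estimates, std_errors)
```

Large Q relative to its degrees of freedom, a high I2, or a large tau2 are the kinds of signals that would prompt a rating-down for inconsistency.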

           

          Quotes for thought: 

          "Life is a long lesson in humility” – James M. Barrie

          "We don't make mistakes; we have happy accidents." --Bob Ross

          "The chief function of the body is to carry the brain around." --Thomas A. Edison

          "You don't have to see the whole staircase, just take the first step." --Dr. Martin Luther King Jr.

          “Outside of a dog, a book is man’s best friend. Inside of a dog, it’s too dark to read.”—Groucho Marx

          Joanne Dehnbostel

          Jan 31, 2024, 7:11:40 AM
          to Health Evidence Knowledge Accelerator (HEvKA)

           

          5 people (BA, HL, JD, KS, KW) participated today in up to 2 active working group meetings.

          Today the Measuring the Rate of Scientific Knowledge Transfer working group was cancelled to allow participation in the January 2024 HL7 Working Group Meeting.

          The StatisticsOnFHIR Working Group (HL7 CDS EBMonFHIR sub-WG) and the Ontology Management Working Group continued to review progress on a ResearchStudy example (https://fevir.net/resources/project/195718, https://fevir.net/resources/195732).

          The group also discussed how to represent Time from Event studies and added a comment to a Jira ticket on this subject: https://jira.hl7.org/browse/FHIR-43956.

          January 2024 HL7 Working Group Meeting

          • The Clinical Decision Support (CDS) working group hosted a review of the EBMonFHIR project by Dr. Brian S. Alper
          • The EBMonFHIR Implementation Guide Introductory Information Session was attended by 41 people; slides from the presentation can be viewed here

          Quote for thought: "Nothing is impossible, the word itself says 'I'm Possible'!" --Audrey Hepburn

          Joanne Dehnbostel

          Feb 1, 2024, 4:23:29 AM
          to Health Evidence Knowledge Accelerator (HEvKA)

          5 people (BA, CE , JD, KS, MA) participated today in up to 2 active working group meetings.

          34 people (AI, BA, BS, CA-D, CE , CM, CW, ER, GL, HK, HL, HS, IK, IK , IM, JB, JD, JO, JT, KP, KR, KS, KW, LW, MA, MH, PW, RC, SL, SM, SS, TD, TL, VK) from 13 countries (Belgium, Brazil, Canada, Chile/Spain, China, Finland, Germany, Italy, Norway, Peru, Portugal, UK, USA) participated this month in up to 48 active working group meetings.

          Today the Funding the Knowledge Ecosystem Infrastructure Working Group and Communications Working Group prepared draft proposals for Global Evidence Summit 2, which will be discussed next week. The group also provided responses to reviewer comments for a commentary, “Making Guidelines Computable”, which has been submitted to the journal Clinical and Public Health Guidelines.

          January 2024 HL7 Working Group Meeting

          The Biomedical Research and Regulation (BRR) working group hosted a review of the EBMonFHIR project by Dr. Brian S. Alper.

          Quote for thought: ”We must all learn not only not to fear change, but to embrace it enthusiastically and, perhaps even more important, encourage and drive it.” –Tony Hsieh

          Joanne Dehnbostel

          Feb 2, 2024, 3:01:43 AM
          to Health Evidence Knowledge Accelerator (HEvKA)

          Today the EBM Implementation Guide Working Group (HL7 CDS EBMonFHIR sub-WG) and the Computable EBM Tools Development Working Group were cancelled to allow participation in the January 2024 HL7 Working Group Meeting.

          In the January 2024 HL7 Working Group Meeting:

          The Learning Health Systems (LHS) working group hosted a review of the EBMonFHIR project and EBMonFHIR Connectathon track report by Dr. Brian S. Alper. The group also discussed how to coordinate internally across the CPGonFHIR and EBMonFHIR groups and externally with MCBK and the ACT initiative.  

          Quote for thought: "It's kind of fun to do the impossible!" – Walt Disney

          Joanne Dehnbostel

          Feb 3, 2024, 2:12:32 PM
          to Health Evidence Knowledge Accelerator (HEvKA)

          12 people (BA, CA-D, HK, HL, JD, KS, KW, MA, PW, SM, TD, XS) participated in up to three active working group meetings. 

          The SEVCO Expert Working Group has approved 391 of 603 (64.84%) terms so far. There are currently 10 Risk of Bias terms and 5 Statistic terms open for voting.

           

          On February 2, 2024, the Risk of Bias Terminology Working Group found that of the 10 terms open for vote last week, one term (bias in study eligibility criteria) received enough votes to be approved and added to the SEVCO code system. Two terms (study eligibility criteria not appropriate for review question; incoherence among data, analysis, and interpretation) received negative votes; their definitions were revised and the terms were re-opened for voting. Seven terms (study eligibility criteria not prespecified; study eligibility criteria ambiguous; mixed methods research bias; bias in mixed methods research design; ineffective integration of qualitative and quantitative study components; inappropriate interpretation of integration of qualitative and quantitative findings; inadequate handling of inconsistency between qualitative and quantitative findings) did not have enough votes for approval. One new term (study eligibility criteria limits for study characteristics not appropriate) was also defined, resulting in 10 terms currently open for voting.

           

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

          study eligibility criteria not prespecified

          A bias in study eligibility criteria in which the criteria are not stated before the study selection process occurs.

          Failure to specify the study eligibility criteria before evaluating studies for selection can lead to a situation in which the data discovered during the study selection process influences the criteria for selection in a way that introduces systematic error.

          study eligibility criteria not appropriate for review question

          A bias in study eligibility criteria resulting from inclusion and exclusion criteria used to select studies for review that could result in a mismatch between the selected studies and the research question.

          The mismatch between the selected studies and the research question may relate to differences in the population, exposures, or outcomes studied from those of interest.

          study eligibility criteria ambiguous

          A bias in study eligibility criteria due to unclear specification.

          Eligibility criteria that are not sufficiently described to enable reproduction of study selection can introduce systematic error.

          study eligibility criteria limits for study characteristics not appropriate

          A bias in study eligibility criteria that is specific to restrictions based on study characteristics.

          Examples may include criteria based on study size, study design, study quality, or date of publication.

          incoherence among data, analysis, and interpretation

          A qualitative research bias in which there is any mismatch among hypothesis, data collected, data analysis, and results interpretation.

          The term mismatch applies to an inappropriate or wrong or inadequate relationship.

          mixed methods research bias

          A bias specific to the alignment of design, conduct, analysis or reporting of qualitative research and quantitative research within the same research project.

          Mixed methods research is a research approach that combines both qualitative and quantitative research methods within a single study or research project. This methodology aims to provide a more comprehensive understanding of a research problem by integrating the strengths of both qualitative and quantitative research. Examples of mixed methods research include combining surveys with in-depth interviews, using quantitative data to identify patterns and trends followed by qualitative data to explore the underlying reasons and meanings, or incorporating qualitative findings to help interpret and validate quantitative results. Overall, mixed methods research provides a more holistic understanding of a research question by acknowledging and leveraging the strengths of both qualitative and quantitative approaches.

          bias in mixed methods research design

          A mixed methods research bias in which the mixed methods approach used in a study is not appropriate for the research question and problem.

          • inadequate rationale for mixed methods design

          This corresponds to signaling question 5.1 in the Mixed Methods Appraisal Tool (MMAT): Is there an adequate rationale for using a mixed methods design to address the research question?

          Common mixed methods designs include:

          Convergent design: The QUAL and QUAN components are usually (but not necessarily) concomitant. The purpose is to examine the same phenomenon by interpreting QUAL and QUAN results (bringing data analysis together at the interpretation stage), or by integrating QUAL and QUAN datasets (e.g., data on same cases), or by transforming data (e.g., quantization of qualitative data).

          Sequential explanatory design: Results of the phase 1 QUAN component inform the phase 2 QUAL component. The purpose is to explain QUAN results using QUAL findings. E.g., the QUAN results guide the selection of QUAL data sources and data collection, and the QUAL findings contribute to the interpretation of QUAN results.

          Sequential exploratory design: Results of the phase 1 QUAL component inform the phase 2 QUAN component. The purpose is to explore, develop and test an instrument (or taxonomy), or a conceptual framework (or theoretical model). E.g., the QUAL findings inform the QUAN data collection, and the QUAN results allow a statistical generalization of the QUAL findings.

          Key references: Creswell et al. (2011); Creswell and Plano Clark, (2017); O'Cathain (2010)

          Reference - Hong QN, Pluye P, Fàbregues S, Bartlett G, Boardman F, Cargo M, Dagenais P, Gagnon M-P, Griffiths F, Nicolau B, O’Cathain A, Rousseau M-C, Vedel I. Mixed Methods Appraisal Tool (MMAT), version 2018. Registration of Copyright (#1148552), Canadian Intellectual Property Office, Industry Canada. Accessed at: http://mixedmethodsappraisaltoolpublic.pbworks.com/w/file/fetch/127916259/MMAT_2018_criteria-manual_2018-08-01_ENG.pdf

          ineffective integration of qualitative and quantitative study components

          A mixed methods research bias in which the qualitative research and quantitative research components are not adequately combined.

          inappropriate interpretation of integration of qualitative and quantitative findings

          A mixed methods research bias in which the process of combining the results of the constituent analyses is flawed.

          This criterion is related to meta-inference, which is defined as the overall interpretations derived from integrating qualitative and quantitative findings (Teddlie and Tashakkori, 2009). Meta-inference occurs during the interpretation of the findings from the integration of the qualitative and quantitative components, and shows the added value of conducting a mixed methods study rather than having two separate studies. (Pluye et al 2018)

          inadequate handling of inconsistency between qualitative and quantitative findings

          A mixed methods research bias in which discrepancies in the results from the qualitative and quantitative components are not adequately addressed.

          When integrating the findings from the qualitative and quantitative components, divergences and inconsistencies (also called conflicts, contradictions, discordances, discrepancies, and dissonances) can be found. It is not sufficient to only report the divergences; they need to be explained. Different strategies to address the divergences have been suggested such as reconciliation, initiation, bracketing and exclusion (Pluye et al., 2009b). (Reference - Hong QN, Pluye P, Fàbregues S, Bartlett G, Boardman F, Cargo M, Dagenais P, Gagnon M-P, Griffiths F, Nicolau B, O’Cathain A, Rousseau M-C, Vedel I. Mixed Methods Appraisal Tool (MMAT), version 2018. Registration of Copyright (#1148552), Canadian Intellectual Property Office, Industry Canada. Accessed at: http://mixedmethodsappraisaltoolpublic.pbworks.com/w/file/fetch/127916259/MMAT_2018_criteria-manual_2018-08-01_ENG.pdf)

          GRADE Ontology Working Group: There were three terms open for vote last week. Two of those terms (Risk of bias across studies, Risk of bias within a study) passed with a vote of 9-0. The group then continued to discuss the third term, Inconsistency, which received some negative votes and several comments (10 people voted and the vote was 5-5). During this discussion the definition evolved, and there are still comments to discuss. The term remains open for voting so that additional comments and votes can be added. If you have already voted on this term, you can change your vote, but there is no need to vote again if you are happy with your original vote. The February 2, 2024 GRADE Ontology meeting was recorded, and the recording can be viewed here.

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

          Inconsistency

          Variation in findings among studies.

          Variations may include differences across estimates from different studies or may include differences across estimates from different analyses (such as sensitivity analyses) within studies. Variations may include differences in the magnitude or direction of the findings. Criteria for evaluating inconsistency include (but are not limited to) similarity of point estimates, extent of overlap of confidence intervals, and statistical criteria including tests of heterogeneity, I2 or Tau2. Inconsistency is one of the domains that can impact the rating of the certainty of evidence. Related terms used by others for "inconsistency" include "heterogeneity", "statistical heterogeneity", and "clinical heterogeneity". Although any variations may be considered inconsistency, rating down the certainty of evidence due to inconsistency implies large, unexplained, or unjustified inconsistency.

           

          The Project Management Working Group prepared a proposed weekly agenda:

          Day/Time (Eastern)

          Working Group

          Agenda Items

          Monday 8-9 am

          Project Management

          FHIR changes and EBMonFHIR Implementation Guide issues

          Monday 9-10 am

          Setting the Scientific Record on FHIR

          Review objectives and priorities, development of GRADEpro-to-FEvIR Converter

          Monday 10-11 am

          CQL Development (a CDS EBMonFHIR sub-WG)

          Determine objectives and priorities

          Monday 2-3 pm

          Statistic Terminology

          SEVCO terms for measures of dispersion and calibration (5 terms open for vote)

          Tuesday 9 am-10 am

          Measuring the Rate of Scientific Knowledge Transfer

          Review changes for overall project support (SKAF Board last Tuesday of each month)

          Tuesday 2-3 pm

          StatisticsOnFHIR (a CDS EBMonFHIR sub-WG)

          Review objectives and priorities, review progress on ResearchStudy and Evidence examples

          Tuesday 3-4 pm

          Ontology Management

          Review objectives and priorities

          Wednesday 8-9 am

          Funding the Ecosystem Infrastructure

          GINTech proposal for Global Evidence Summit 2

          Wednesday 9-10 am

          Communications (Awareness, Scholarly Publications)

          Publications, Presentations (Global Evidence Summit 2), Website

          Thursday 8-9 am

          EBM Implementation Guide (a CDS EBMonFHIR sub-WG)

          Review IG Ballot feedback, Prepare EBMonFHIR IG CodeSystem for HL7 terminology

          Thursday 9-10 am

          Computable EBM Tools Development

          Review objectives and priorities

          Friday 9-10 am

          Risk of Bias Terminology

          Review SEVCO terms for qualitative research bias and study selection bias (10 terms open for vote)

          Friday 10-11 am

          GRADE Ontology

          Term development review Inconsistency (1 term open for vote)

          Friday 12-1 pm

          Project Management

          Prepare Weekly Agenda

           

          Quote for Thought: 

          "Phil: Do you ever have déjà vu, Mrs. Lancaster?"

          "Mrs. Lancaster: I don't think so, but I could check with the kitchen." – From the movie "Groundhog Day”

          Joanne Dehnbostel

          Feb 5, 2024, 3:40:47 AM
          to Health Evidence Knowledge Accelerator (HEvKA) Weekly Update, Health Evidence Knowledge Accelerator (HEvKA)

          Project Coordination Updates:

          13 people (BA, CA-D, CE , HK, HL, JD, KS, KW, MA, PW, SM, TD, XS) from 6 countries (Belgium, Canada, Norway, Peru, UK, USA) participated this week in up to 8 active working group meetings. 6 regular HEvKA meetings were cancelled to allow for participation in the HL7 January 2024 Working Group Meeting (WGM).

          34 people (AI, BA, BS, CA-D, CE , CM, CW, ER, GL, HK, HL, HS, IK, IK , IM, JB, JD, JO, JT, KP, KR, KS, KW, LW, MA, MH, PW, RC, SL, SM, SS, TD, TL, VK) from 13 countries (Belgium, Brazil, Canada, Chile/Spain, China, Finland, Germany, Italy, Norway, Peru, Portugal, UK, USA) participated in up to 48 active working group meetings in the month of January.

          On February 2, the Project Management Working Group prepared a proposed weekly agenda for the week of February 5-February 9:

          Day/Time (Eastern)

          Working Group

          Agenda Items

          Monday 8-9 am

          Project Management

          FHIR changes and EBMonFHIR Implementation Guide issues

          Monday 9-10 am

          Setting the Scientific Record on FHIR

          Review objectives and priorities, development of GRADEpro-to-FEvIR Converter

          Monday 10-11 am

          CQL Development (a CDS EBMonFHIR sub-WG)

          Determine objectives and priorities

          Monday 2-3 pm

          Statistic Terminology

          SEVCO terms for measures of dispersion and calibration (5 terms open for vote)

          Tuesday 9 am-10 am

          Measuring the Rate of Scientific Knowledge Transfer

          Review changes for overall project support (SKAF Board last Tuesday of each month)

          Tuesday 2-3 pm

          StatisticsOnFHIR (a CDS EBMonFHIR sub-WG)

          Review objectives and priorities, review progress on ResearchStudy and Evidence examples

          Tuesday 3-4 pm

          Ontology Management

          Review objectives and priorities

          Wednesday 8-9 am

          Funding the Ecosystem Infrastructure

          GINTech proposal for Global Evidence Summit 2

          Wednesday 9-10 am

          Communications (Awareness, Scholarly Publications)

          Publications, Presentations (Global Evidence Summit 2), Website

          Thursday 8-9 am

          EBM Implementation Guide (a CDS EBMonFHIR sub-WG)

          Review IG Ballot feedback, Prepare EBMonFHIR IG CodeSystem for HL7 terminology

          Thursday 9-10 am

          Computable EBM Tools Development

          Review objectives and priorities

          Friday 9-10 am

          Risk of Bias Terminology

          Review SEVCO terms for qualitative research bias and study selection bias (10 terms open for vote)

          Friday 10-11 am

          GRADE Ontology

          Term development review Inconsistency (1 term open for vote)

          Friday 12-1 pm

          Project Management

          Prepare Weekly Agenda

           

          On January 31, the Communications Working Group prepared draft proposals for Global Evidence Summit 2, which will be discussed next week. The group also provided responses to reviewer comments for a commentary, “Making Guidelines Computable”, which has been submitted to the journal Clinical and Public Health Guidelines.

          Scientific Evidence Code System Updates:

          The SEVCO Expert Working Group has approved 391 of 603 (64.84%) terms so far. There are currently 10 Risk of Bias terms, and 5 Statistic terms open for voting.  

          On January 29, the Statistic Terminology Working Group found that all 5 of the terms open for vote last week received only 3 votes, so none of the terms have enough votes to be approved for the code system.

          However, the term mean calibration, open for voting the previous week, did receive enough votes for approval and was incorrectly reported last week as still open for vote. I apologize for any confusion this may have caused. 

          The 5 statistic terms currently open for voting are listed below:

           

          Term

          Definition

          Alternative Terms

          Comment for application

          calibration intercept

          A measure of calibration that is the difference between the mean expected value and the mean observed value.

          calibration-in-the-large

          The calibration intercept is computed from a statistical model for calibration of binary variables (0 or 1) where the log odds of the predicted probabilities is a linear function of the empirical frequencies. The notion of calibration-in-the-large is that the intercept is a gross assessment of whether the average prediction matches the average outcome; however, this interpretation is exquisitely sensitive to the choice of referent factors within the prediction model.

          calibration slope

          A measure of calibration computed from the rate of change in observed value per unit change in the predicted value.

          calibration-in-the-small

          The calibration slope is computed from a statistical model for calibration of binary variables (0 or 1) where the log odds of the predicted probabilities is a linear function of the empirical frequencies. A slope of 1 represents perfect calibration; 0 represents lack of correlation. Inspection of the graph allows identification of areas of under- and overconfidence.
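The two calibration measures above can be sketched as a logistic recalibration model: regress the observed binary outcomes on the logit of the predicted probabilities, and read off the intercept and slope. This is a minimal illustration with made-up data (note that calibration-in-the-large is conventionally estimated with the slope fixed at 1; here both parameters are fit jointly for brevity).

```python
import math

def logit(p): return math.log(p / (1 - p))
def sigmoid(z): return 1 / (1 + math.exp(-z))

def calibration_fit(pred_probs, outcomes, iters=25):
    """Fit logistic regression of outcomes on logit(predicted probability)
    by Newton-Raphson; returns (intercept, slope).
    Perfect calibration corresponds to intercept 0 and slope 1."""
    x = [logit(p) for p in pred_probs]
    a, b = 0.0, 1.0  # start at perfect calibration
    for _ in range(iters):
        # gradient and (negated) Hessian of the log-likelihood
        g0 = g1 = h00 = h01 = h11 = 0.0
        for xi, yi in zip(x, outcomes):
            mu = sigmoid(a + b * xi)
            r, w = yi - mu, mu * (1 - mu)
            g0 += r; g1 += r * xi
            h00 += w; h01 += w * xi; h11 += w * xi * xi
        det = h00 * h11 - h01 * h01
        a += ( h11 * g0 - h01 * g1) / det
        b += (-h01 * g0 + h00 * g1) / det
    return a, b

# Hypothetical predicted probabilities and observed binary outcomes
preds = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
obs   = [1,   1,   0,   1,   0,   1,   0,   0]
intercept, slope = calibration_fit(preds, obs)
```

In practice this would be done with an established statistics package; the sketch only shows what the intercept and slope are measuring.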

          standard error of the difference between independent means

          A measure of dispersion applied to differences between means of independent groups across hypothetical repeated random samples.

          <See standard error of the difference between independent means for the comment for application which includes the formula.>

          standard error of the difference between independent proportions

          A measure of dispersion applied to differences between proportions arising from independent groups across hypothetical repeated random samples.

          <See standard error of the difference between independent proportions for the comment for application which includes the formula.>
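The formulas referenced in the SEVCO comments are not reproduced in this digest. As a hedged sketch, the conventional (unpooled) textbook forms of these two standard errors are:

```python
import math

def se_diff_means(s1, n1, s2, n2):
    """Standard error of the difference between two independent means
    (unpooled form): sqrt(s1^2/n1 + s2^2/n2)."""
    return math.sqrt(s1**2 / n1 + s2**2 / n2)

def se_diff_props(p1, n1, p2, n2):
    """Standard error of the difference between two independent
    proportions: sqrt(p1(1-p1)/n1 + p2(1-p2)/n2)."""
    return math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

# Illustrative values: two groups with made-up summary statistics
se_m = se_diff_means(s1=10.0, n1=50, s2=12.0, n2=60)
se_p = se_diff_props(p1=0.40, n1=200, p2=0.30, n2=180)
```

These standard forms assume independent samples; whether SEVCO's comments for application use exactly these expressions should be checked against the linked term entries.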

          measure of discrimination

          A statistic that quantifies the degree to which a classifier can distinguish among two or more groups.

          A classifier is a rule, formula, algorithm, or procedure used to label an instance based on its attributes.
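One widely used measure of discrimination for a binary classifier is the concordance (c) statistic, equivalent to the area under the ROC curve. A minimal sketch with hypothetical scores:

```python
def c_statistic(scores, labels):
    """Concordance (c) statistic: the proportion of (event, non-event)
    pairs in which the event receives the higher score; ties count 0.5.
    Equivalent to the area under the ROC curve for binary outcomes."""
    events     = [s for s, y in zip(scores, labels) if y == 1]
    non_events = [s for s, y in zip(scores, labels) if y == 0]
    concordant = 0.0
    for e in events:
        for n in non_events:
            if e > n:
                concordant += 1.0
            elif e == n:
                concordant += 0.5
    return concordant / (len(events) * len(non_events))

# Illustrative classifier scores and true group labels
scores = [0.9, 0.8, 0.35, 0.6, 0.4, 0.2]
labels = [1,   1,   1,    0,   0,   0]
auc = c_statistic(scores, labels)
```

A value of 0.5 means no discrimination (random ranking) and 1.0 means perfect separation of the two groups.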

           

          The Statistic Terminology Working Group then continued to work on the application of study eligibility criteria in a real research study example using the Group Resource: https://fevir.net/resources/Group/183488.

           

          On February 2, 2024, the Risk of Bias Terminology Working Group found that of the 10 terms open for vote last week, one term (bias in study eligibility criteria) received enough votes to be approved and added to the SEVCO code system. Two terms (study eligibility criteria not appropriate for review question; incoherence among data, analysis, and interpretation) received negative votes; their definitions were revised and the terms were re-opened for voting. Seven terms (study eligibility criteria not prespecified; study eligibility criteria ambiguous; mixed methods research bias; bias in mixed methods research design; ineffective integration of qualitative and quantitative study components; inappropriate interpretation of integration of qualitative and quantitative findings; inadequate handling of inconsistency between qualitative and quantitative findings) did not have enough votes for approval. One new term (study eligibility criteria limits for study characteristics not appropriate) was also defined, resulting in 10 terms currently open for voting.

           

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

study eligibility criteria not prespecified

A bias in study eligibility criteria in which the criteria are not stated before the study selection process occurs.

Failure to specify the study eligibility criteria before evaluating studies for selection can lead to a situation in which the data discovered during the study selection process influences the criteria for selection in a way that introduces systematic error.

study eligibility criteria not appropriate for review question

A bias in study eligibility criteria resulting from inclusion and exclusion criteria used to select studies for review that could result in a mismatch between the selected studies and the research question.

The mismatch between the selected studies and the research question may relate to differences in the population, exposures, or outcomes studied from those of interest.

          study eligibility criteria ambiguous

          A bias in study eligibility criteria due to unclear specification.

          Eligibility criteria that are not sufficiently described to enable reproduction of study selection can introduce systematic error.

          study eligibility criteria limits for study characteristics not appropriate

          A bias in study eligibility criteria that is specific to restrictions based on study characteristics.

          Examples may include criteria based on study size, study design, study quality, or date of publication.

          incoherence among data, analysis, and interpretation

          A qualitative research bias in which there is any mismatch among hypothesis, data collected, data analysis, and results interpretation.

The term mismatch applies to an inappropriate, wrong, or inadequate relationship.

          mixed methods research bias

          A bias specific to the alignment of design, conduct, analysis or reporting of qualitative research and quantitative research within the same research project.

          Mixed methods research is a research approach that combines both qualitative and quantitative research methods within a single study or research project. This methodology aims to provide a more comprehensive understanding of a research problem by integrating the strengths of both qualitative and quantitative research. Examples of mixed methods research include combining surveys with in-depth interviews, using quantitative data to identify patterns and trends followed by qualitative data to explore the underlying reasons and meanings, or incorporating qualitative findings to help interpret and validate quantitative results. Overall, mixed methods research provides a more holistic understanding of a research question by acknowledging and leveraging the strengths of both qualitative and quantitative approaches.

          bias in mixed methods research design

          A mixed methods research bias in which the mixed methods approach used in a study is not appropriate for the research question and problem.

          • inadequate rationale for mixed methods design

This corresponds to signaling question 5.1 in the Mixed Methods Appraisal Tool (MMAT): Is there an adequate rationale for using a mixed methods design to address the research question?

          Common mixed methods designs include:

Convergent design: The QUAL and QUAN components are usually (but not necessarily) concomitant. The purpose is to examine the same phenomenon by interpreting QUAL and QUAN results (bringing data analysis together at the interpretation stage), by integrating QUAL and QUAN datasets (e.g., data on same cases), or by transforming data (e.g., quantization of qualitative data).

Sequential explanatory design: Results of the phase 1 QUAN component inform the phase 2 QUAL component. The purpose is to explain QUAN results using QUAL findings. E.g., the QUAN results guide the selection of QUAL data sources and data collection, and the QUAL findings contribute to the interpretation of QUAN results.

Sequential exploratory design: Results of the phase 1 QUAL component inform the phase 2 QUAN component. The purpose is to explore, develop and test an instrument (or taxonomy), or a conceptual framework (or theoretical model). E.g., the QUAL findings inform the QUAN data collection, and the QUAN results allow a statistical generalization of the QUAL findings.

Key references: Creswell et al. (2011); Creswell and Plano Clark (2017); O'Cathain (2010)

          Reference - Hong QN, Pluye P, Fàbregues S, Bartlett G, Boardman F, Cargo M, Dagenais P, Gagnon M-P, Griffiths F, Nicolau B, O’Cathain A, Rousseau M-C, Vedel I. Mixed Methods Appraisal Tool (MMAT), version 2018. Registration of Copyright (#1148552), Canadian Intellectual Property Office, Industry Canada. Accessed at: http://mixedmethodsappraisaltoolpublic.pbworks.com/w/file/fetch/127916259/MMAT_2018_criteria-manual_2018-08-01_ENG.pdf

          ineffective integration of qualitative and quantitative study components

          A mixed methods research bias in which the qualitative research and quantitative research components are not adequately combined.

          inappropriate interpretation of integration of qualitative and quantitative findings

          A mixed methods research bias in which the process of combining the results of the constituent analyses is flawed.

This criterion is related to meta-inference, which is defined as the overall interpretations derived from integrating qualitative and quantitative findings (Teddlie and Tashakkori, 2009). Meta-inference occurs during the interpretation of the findings from the integration of the qualitative and quantitative components, and shows the added value of conducting a mixed methods study rather than having two separate studies. (Pluye et al., 2018)

          inadequate handling of inconsistency between qualitative and quantitative findings

          A mixed methods research bias in which discrepancies in the results from the qualitative and quantitative components are not adequately addressed.

          When integrating the findings from the qualitative and quantitative components, divergences and inconsistencies (also called conflicts, contradictions, discordances, discrepancies, and dissonances) can be found. It is not sufficient to only report the divergences; they need to be explained. Different strategies to address the divergences have been suggested such as reconciliation, initiation, bracketing and exclusion (Pluye et al., 2009b). (Reference - Hong QN, Pluye P, Fàbregues S, Bartlett G, Boardman F, Cargo M, Dagenais P, Gagnon M-P, Griffiths F, Nicolau B, O’Cathain A, Rousseau M-C, Vedel I. Mixed Methods Appraisal Tool (MMAT), version 2018. Registration of Copyright (#1148552), Canadian Intellectual Property Office, Industry Canada. Accessed at: http://mixedmethodsappraisaltoolpublic.pbworks.com/w/file/fetch/127916259/MMAT_2018_criteria-manual_2018-08-01_ENG.pdf)

           

          HL7 Standards Development Updates:

On January 30, the StatisticsOnFHIR Working Group (HL7 CDS EBMonFHIR sub-WG) continued to review progress on a ResearchStudy example: https://fevir.net/resources/project/195718, https://fevir.net/resources/195732.

The group also discussed how to represent Time from Event studies and added a comment to a Jira ticket on this subject: https://jira.hl7.org/browse/FHIR-43956.

           

          January 2024 HL7 Working Group Meeting

          On January 29, two HL7 Working Group Meeting attendees joined us in a virtual "meetup" for an Introduction to the FEvIR Platform. 

On January 29 (Monday Q3), the Biomedical Research and Regulation Working Group worked through Jira tickets regarding the ResearchSubject Resource.

On January 30, the Clinical Decision Support (CDS) working group hosted a review of the EBMonFHIR project by Dr. Brian S. Alper.

On January 30, the EBMonFHIR Implementation Guide Introductory Information Session was attended by 41 people; slides from the presentation can be viewed here.

On January 31, the Biomedical Research and Regulation (BRR) working group hosted a review of the EBMonFHIR project by Dr. Brian S. Alper.

          On February 1, the Learning Health Systems (LHS) working group hosted a review of the EBMonFHIR project and EBMonFHIR Connectathon track report by Dr. Brian S. Alper. The group also discussed how to coordinate internally across the CPGonFHIR and EBMonFHIR groups and externally with MCBK and the ACT initiative.

          Knowledge Ecosystem Liaison/Coordination Updates:

On January 31, the Funding the Knowledge Ecosystem Infrastructure Working Group prepared draft proposals for Global Evidence Summit 2, which will be discussed next week. The group also provided responses to reviewer comments for a commentary, “Making Guidelines Computable”, which has been submitted to the journal Clinical and Public Health Guidelines.

On January 30, the Ontology Management Working Group continued to review progress on a ResearchStudy example: https://fevir.net/resources/project/195718, https://fevir.net/resources/195732.

The group also discussed how to represent Time from Event studies and added a comment to a Jira ticket on this subject: https://jira.hl7.org/browse/FHIR-43956.

           

On February 2, the GRADE Ontology Working Group found that of the three terms open for vote last week, two terms (Risk of bias across studies and Risk of bias within a study) passed with a vote of 9-0. The group then discussed the third term, Inconsistency, which received some negative votes and several comments (10 people voted and the vote was 5-5). During this discussion the definition evolved, and there are still comments to discuss. The term remains open for voting so that additional comments and votes can be added. If you have already voted on this term, you can change your vote, but there is no need to vote again if you are happy with your original vote. The February 2, 2024 GRADE Ontology meeting was recorded and the recording can be viewed here.

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

          Inconsistency

          Variation in findings among studies.

Variations may include differences across estimates from different studies or may include differences across estimates from different analyses (such as sensitivity analyses) within studies. Variations may include differences in the magnitude or direction of the findings. Criteria for evaluating inconsistency include (but are not limited to) similarity of point estimates, extent of overlap of confidence intervals, and statistical criteria including tests of heterogeneity, I² or Tau². Inconsistency is one of the domains that can impact the rating of the certainty of evidence. Related terms used by others for "inconsistency" include "heterogeneity", "statistical heterogeneity", and "clinical heterogeneity". Although any variations may be considered inconsistency, rating down the certainty of evidence due to inconsistency implies large, unexplained, or unjustified inconsistency.
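The statistical criteria mentioned in the comment for application (a test of heterogeneity and I²) can be sketched numerically. This is a generic textbook computation offered for illustration, not GRADE guidance: Cochran's Q is the weighted sum of squared deviations from the fixed-effect pooled estimate, and I² = max(0, (Q − df)/Q) expresses the share of variability beyond what chance would explain.

```python
def i_squared(estimates, variances):
    """Cochran's Q and I^2 for between-study inconsistency.

    Each study contributes an effect estimate and its variance; weights
    are inverse variances, and Q is measured around the fixed-effect
    pooled estimate."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, estimates))
    df = len(estimates) - 1
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0
    return q, i2

# Identical estimates show no inconsistency; divergent ones raise Q and I^2.
print(i_squared([1.0, 1.0], [1.0, 1.0]))  # (0.0, 0.0)
print(i_squared([0.0, 2.0], [1.0, 1.0]))  # (2.0, 0.5)
```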

           

          Quotes for Thought: 

          "Happiness is not achieved by the conscious pursuit of happiness; it is generally the by-product of other activities."–Aldous Huxley

          "Nothing is impossible, the word itself says 'I'm Possible'!" --Audrey Hepburn

“We must all learn not only not to fear change, but to embrace it enthusiastically and, perhaps even more important, encourage and drive it.” –Tony Hsieh

          "It's kind of fun to do the impossible!" – Walt Disney

          "Phil: Do you ever have déjà vu, Mrs. Lancaster?"

"Mrs. Lancaster: I don't think so, but I could check with the kitchen." – From the movie "Groundhog Day"

           

          To get involved or stay informed: HEvKA Project Page on FEvIR Platform, HEvKA Project Page on HL7 Confluence, or join any of the groups that are now meeting in the following weekly schedule:

          Joanne Dehnbostel

Feb 6, 2024, 9:05:20 AM
to Health Evidence Knowledge Accelerator (HEvKA)

7 people (BA, CE, JD, JJ, KS, KW, RL) participated in up to four active working group meetings.

Today the Project Management Working Group worked to resolve several FHIR trackers and added a request for approval of UTG Change Proposal UP-427 (Add 3 terms to http://terminology.hl7.org/CodeSystem/variable-role) to the agenda of the next HL7 Clinical Decision Support (CDS) Meeting.

The Setting the Scientific Record on FHIR Working Group met with representatives from the Systematic Review Data Repository (SRDR) and discussed changes in the R6 version of FHIR, especially as they pertain to the Group and EvidenceVariable Resources.

The CQL Development Working Group discussed how Clinical Quality Language (CQL) could be used to describe eligibility criteria for a group of adolescents with obesity and created the Group Resource StudyEligibilityCriteria: Obese patients 12-17 years old, which contains a completed CQL expression. The group went on to define a group with obesity including any of a list of conditions (the ObesityConditions ValueSet) except generalized or morbid obesity, and will continue working to add CQL to StudyEligibilityCriteria: Adolescents with non-syndromic obesity.
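As an illustration of the kind of structure involved, here is a minimal, hypothetical sketch of a FHIR Group resource carrying eligibility criteria, with the age limit expressed in CQL. The content below is invented for illustration (it is not the actual StudyEligibilityCriteria resource), and carrying the criterion as a characteristic valueExpression is an assumption; the exact element support varies by FHIR version.

```python
# Hypothetical sketch: a Group resource whose characteristics define
# study eligibility. The age criterion is carried as a CQL Expression
# (an assumption about element placement); the condition criterion is a
# plain CodeableConcept stand-in for a ValueSet-based definition.
group = {
    "resourceType": "Group",
    "title": "StudyEligibilityCriteria sketch: adolescents with obesity",
    "membership": "definitional",
    "characteristic": [
        {
            "code": {"text": "age"},
            "valueExpression": {
                "language": "text/cql",
                "expression": "AgeInYears() >= 12 and AgeInYears() <= 17",
            },
            "exclude": False,
        },
        {
            "code": {"text": "condition"},
            "valueCodeableConcept": {"text": "obesity"},
            "exclude": False,
        },
    ],
}
```

Each characteristic is a conjunct: a candidate must satisfy every non-excluded characteristic to be in the group, which is why an exclusion such as "except generalized or morbid obesity" would be modeled as a further characteristic with exclude set to true.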

The Statistic Terminology Working Group found that none of the 5 terms open for vote last week received enough votes to be approved for the code system. However, two of the terms had comments, which were discussed in today’s meeting. This discussion resulted in the deprecation of the term measure of discrimination from the code system and in moving area under the curve directly under statistic in the code system hierarchy. The comment for application for calibration intercept was also modified. We will continue to discuss the definition for area under the curve next week.

          There are currently 4 Statistic terms open for voting: 

           

           

          Term

          Definition

          Alternative Terms

          Comment for application

          calibration intercept

          A measure of calibration that is the difference between the mean expected value and the mean observed value.

          calibration-in-the-large

          The calibration intercept is computed from a statistical model for calibration of binary variables (0 or 1) where the log odds of the predicted probabilities is a linear function of the empirical frequencies. The notion of calibration in the large is that the intercept is a gross assessment of whether the average prediction matches the average outcome, however, this interpretation is exquisitely sensitive to the choice of referent factors within the prediction model.

          calibration slope

          A measure of calibration computed from the rate of change in observed value per unit change in the predicted value.

calibration-in-the-small

The calibration slope is computed from a statistical model for calibration of binary variables (0 or 1) where the log odds of the predicted probabilities is a linear function of the empirical frequencies. A slope of 1 represents perfect calibration; 0 represents lack of correlation. Inspection of the graph allows identification of areas of under- and overconfidence.

          standard error of the difference between independent means

          A measure of dispersion applied to differences between means of independent groups across hypothetical repeated random samples.

          <See standard error of the difference between independent means for the comment for application which includes the formula.>

          standard error of the difference between independent proportions

          A measure of dispersion applied to differences between proportions arising from independent groups across hypothetical repeated random samples.

          <See standard error of the difference between independent proportions for the comment for application which includes the formula.>
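The standard error terms above have simple textbook forms, and the notion of calibration-in-the-large can be shown in a reduced, probability-scale form. A sketch for illustration only (the official comments for application carry the formulas; the probability-scale calibration check below is a simplification of the log-odds model described in the table):

```python
import math

def se_diff_means(sd1, n1, sd2, n2):
    """Standard error of the difference between independent means
    (unpooled form): sqrt(sd1^2/n1 + sd2^2/n2)."""
    return math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)

def se_diff_proportions(p1, n1, p2, n2):
    """Standard error of the difference between independent proportions:
    sqrt(p1*(1-p1)/n1 + p2*(1-p2)/n2)."""
    return math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

def calibration_in_the_large(predicted, observed):
    """Gross calibration check on the probability scale: mean predicted
    risk minus mean observed outcome, near zero when the average
    prediction matches the average outcome. (The formal calibration
    intercept is fit on the log-odds scale; this is a simplification.)"""
    return sum(predicted) / len(predicted) - sum(observed) / len(observed)
```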

           

          To participate you can join the Scientific Evidence Code System (SEVCO) Expert Working Group at https://fevir.net/resources/Project/27845

           

          Releases on the FEvIR Platform:

          The Fast Evidence Interoperability Resources (FEvIR) Platform is available for use now, but is “pre-release”.  The current version is 0.202.0 (February 5, 2024). Viewing resources is open without login.  Signing in is free and required to create content (which can then only be edited by the person who created the content).

In Release 0.202.0 (February 5, 2024), the bulk resource uploader limit increased from 100 to 500.

          The Computable Publishing®: RIS-to-FEvIR Converter version 0.13.0 (February 5, 2024) converts data with an RIS format into FEvIR Resources in FHIR Citation JSON.

Release 0.13.0 (February 5, 2024) improved the user interface: UI components from the previous step are now hidden so the tool is more intuitive to use, and the RIS text box no longer overly increases the page height.
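For context on what an RIS-to-Citation conversion involves, here is a hypothetical minimal sketch. The RIS tags used (TY, TI, AU, ER) are standard RIS record tags, but the output shape is a simplified stand-in for a citation record, not the converter's actual FHIR Citation JSON schema.

```python
# Hypothetical sketch of RIS parsing: each record starts with a TY tag
# and ends with ER; every line has the shape "XX  - value". The output
# dicts are a simplified stand-in for FHIR Citation resources.
def ris_to_citations(ris_text):
    citations, current = [], None
    for line in ris_text.splitlines():
        if len(line) < 6 or line[2:6] != "  - ":
            continue  # skip lines that are not RIS tag lines
        tag, value = line[:2], line[6:].strip()
        if tag == "TY":  # a new record starts
            current = {"resourceType": "Citation", "title": None, "authors": []}
        elif current is None:
            continue
        elif tag == "TI":
            current["title"] = value
        elif tag == "AU":
            current["authors"].append(value)
        elif tag == "ER":  # end of record
            citations.append(current)
            current = None
    return citations
```

A real converter maps many more RIS tags (journal, year, DOI, pages) into the corresponding Citation elements; the sketch shows only the record-delimiting logic.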

          The FEvIR®: RAte of Dissemination Assessment Research (RADAR) Tool Version 0.5.0 (February 5, 2024) facilitates research conduct and project management for an investigation to measure the rate of scientific knowledge transfer.

Release 0.5.0 (February 5, 2024) added RIS Citation importing functionality.

          Quote for Thought: "How can you know what you are capable of if you don't embrace the unknown?" – Esmeralda Santiago

          Joanne Dehnbostel

Feb 7, 2024, 8:02:26 AM
          to Health Evidence Knowledge Accelerator (HEvKA)

           

           

          February 6

8 people (BA, CE, HL, JD, KR, KS, KW, RC) participated in up to three active working group meetings.

          The Measuring the Rate of Scientific Knowledge Transfer WG reviewed software updates on the FEvIR Platform that will support the next phase of the project and used the new software to load RIS files of articles that will be used for data entry. 

The StatisticsOnFHIR WG (a CDS EBMonFHIR sub-WG) and the Ontology Management WG continued to work on a ResearchStudy example: https://fevir.net/resources/composition/201561.

          Releases today on the FEvIR Platform:

          The Fast Evidence Interoperability Resources (FEvIR) Platform is available for use now but is “pre-release”.  The current version is 0.203.0 (February 6, 2024). Viewing resources is open without login.  Signing in is free and required to create content (which can then only be edited by the person who created the content).

In Release 0.203.0 (February 6, 2024), the bulk resource importer now supports up to 1,000 resources, up from 500. Toast pop-ups now appear above modals, and the X close button has a slightly improved appearance for modals and elsewhere.

          The FEvIR®: RAte of Dissemination Assessment Research (RADAR) Tool Version 0.6.0 (February 6, 2024) facilitates research conduct and project management for an investigation to measure the rate of scientific knowledge transfer.

Release 0.6.0 (February 6, 2024) now autofills the results-in-registry (yes or no) and Date of Results Posted fields when a user inputs an NCT ID in the study record, loading the data from ClinicalTrials.gov. The import-citation-via-PMID text field now removes tab characters that could be accidentally copied from a table. An additional Save button was added to save and close the modal, while the original Save button saves and keeps the modal open so the user can continue working. The invite modal now lets the user know when they aren't logged in and displays a sign-in button.

          Quote for thought: "Vitality shows in not only the ability to persist but the ability to start over."  --F. Scott Fitzgerald

          Joanne Dehnbostel

Feb 8, 2024, 4:14:15 AM
          to Health Evidence Knowledge Accelerator (HEvKA)

           

           

7 people (BA, CE, JD, JO, KR, KS, MA) participated in up to two active working group meetings.

Today, the Funding the Ecosystem Infrastructure Working Group created a GINTech workshop proposal titled "What is Needed to Make Guidelines Computable?: A consensus-development exercise" for Global Evidence Summit 2.

          The Communications Working Group created a Scientific Knowledge Accelerator Foundation workshop proposal titled "Scientific Evidence Code System (SEVCO): a common language for communicating evidence" for Global Evidence Summit 2 and discussed creating a proposal for an additional oral presentation to introduce the Measuring the Rate of Scientific Knowledge Transfer project. 

          Releases today on the FEvIR Platform:

          The Fast Evidence Interoperability Resources (FEvIR) Platform is available for use now but is “pre-release”.  The current version is 0.204.0 (February 7, 2024). Viewing resources is open without login.  Signing in is free and required to create content (which can then only be edited by the person who created the content).

In Release 0.204.0 (February 7, 2024), certain alert modals, including the RADAR tool modals, can now be closed by clicking outside them or by pressing the ESC key. On the create-new-resource page, when a resource is submitted without a title, a modal now pops up asking for a title instead of a web browser alert.

          The FEvIR®: RAte of Dissemination Assessment Research (RADAR) Tool Version 0.7.0 (February 7, 2024) facilitates research conduct and project management for an investigation to measure the rate of scientific knowledge transfer.

In Release 0.7.0 (February 7, 2024), all alert and confirmation prompts now use a modal instead of in-browser alerts. Modals can be closed by clicking outside them or by pressing the ESC key as long as no changes have been made; otherwise the X button closes the modal, and the user is asked to confirm discarding changes.

          Quote for thought: "May the flowers remind us why the rain was so necessary." – Xan Oku

          Joanne Dehnbostel

Feb 9, 2024, 7:57:24 AM
          to Health Evidence Knowledge Accelerator (HEvKA)

           

           

          5 people (BA, GL, IK, JD, KS) participated in up to two active working group meetings. 

Today the EBM Implementation Guide Working Group (a CDS EBMonFHIR sub-WG) reviewed the EBMonFHIR Implementation Guide Ballot feedback and made improvements to the EBMonFHIR IG, including the application of:

          https://jira.hl7.org/browse/FHIR-43457

https://build.fhir.org/ig/HL7/ebm/StructureDefinition-comparative-evidence-report.html – the section:researchStudy section now has the code research-study, and its entry is limited to References to ResearchStudy or Citation.

Headings and italics were added to improve the implementation guide as described here: https://jira.hl7.org/browse/FHIR-44044. More detail may be added during the Monday Project Management meeting.

          The Computable EBM Tools Development Working Group submitted the completed GINTech workshop proposal titled "What is Needed to Make Guidelines Computable?: A consensus-development exercise" for Global Evidence Summit 2.

          The group also reviewed progress on development of the GRADEpro-to-FEvIR Converter Tool.

          Releases today on the FEvIR Platform:

The FEvIR®: RAte of Dissemination Assessment Research (RADAR) Tool Version 0.7.1 (February 8, 2024) facilitates research conduct and project management for an investigation to measure the rate of scientific knowledge transfer.

In Release 0.7.1 (February 8, 2024), when an NCT ID is entered as the study record, the Results First Posted date loaded from ClinicalTrials.gov is now correct. The navigation links on the left side now work.

          Quote for Thought: "Second thoughts are ever wiser." – Euripides

          Joanne Dehnbostel

Feb 10, 2024, 3:02:06 PM
to Health Evidence Knowledge Accelerator (HEvKA)

11 people (BA, HK, HL, JD, KS, KW, MA, PW, SM, SS, TD) participated in up to three active working group meetings.

The Risk of Bias Terminology Working Group found that none of the ten terms open for voting last week received enough votes to be approved. All ten (study eligibility criteria not prespecified, study eligibility criteria not appropriate for review question, study eligibility criteria ambiguous, study eligibility criteria limits for study characteristics not appropriate, incoherence among data, analysis, and interpretation, mixed methods research bias, bias in mixed methods research design, ineffective integration of qualitative and quantitative study components, inappropriate interpretation of integration of qualitative and quantitative findings, and inadequate handling of inconsistency between qualitative and quantitative findings) remain open for voting.

Based on comments received, study eligibility criteria limits for study characteristics not appropriate has an additional comment for application.

Two new terms were then defined (study eligibility criteria limits for study report characteristics not appropriate and bias in search strategy), so there are currently 12 terms open for voting.

           

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

          study eligibility criteria not prespecified

          A bias in study eligibility criteria in which the criteria are not stated before the study selection process occurs.

          Failure to specify the study eligibility criteria before evaluating studies for selection can lead to a situation in which the data discovered during the study selection process influences the criteria for selection in a way that introduces systematic error.

          study eligibility criteria not appropriate for review question

          A bias in study eligibility criteria resulting from inclusion and exclusion criteria used to select studies for review that could result in a mismatch between the selected studies and the research question.

The mismatch between the selected studies and the research question may relate to differences in the population, exposures, or outcomes studied from those of interest.

          study eligibility criteria ambiguous

          A bias in study eligibility criteria due to unclear specification.

          Eligibility criteria that are not sufficiently described to enable reproduction of study selection can introduce systematic error.

          study eligibility criteria limits for study characteristics not appropriate

          A bias in study eligibility criteria that is specific to restrictions based on study characteristics.

          Any restrictions applied on the basis of study characteristics should not introduce a bias in study eligibility criteria. Examples of such restrictions may include criteria based on study size, study design, study quality, or date of publication.

          In the ROBIS tool used for risk of bias assessment of systematic reviews, there is a question (1.4 Were all restrictions in eligibility criteria based on study characteristics appropriate?) that is different from the one which refers to whether the eligibility criteria are appropriate to the review question. Therefore, a separate term is available in SEVCO.

          study eligibility criteria limits for study report characteristics not appropriate

          A bias in study eligibility criteria that is specific to restrictions on the status, structure, language, or accessibility of the study report.

Examples of study report characteristics include publication status (including preprints and unpublished data), format, language, and availability of data. The ROBIS tool used for assessing risk of bias of systematic reviews includes a signaling question: '1.5 Were any restrictions in eligibility criteria based on sources of information appropriate?'

          bias in search strategy

          A study selection bias specific to the strategy used to identify potentially eligible studies.

          incoherence among data, analysis, and interpretation

          A qualitative research bias in which there is any mismatch among hypothesis, data collected, data analysis, and results interpretation.

The term mismatch applies to an inappropriate, wrong, or inadequate relationship.

          mixed methods research bias

          A bias specific to the alignment of design, conduct, analysis or reporting of qualitative research and quantitative research within the same research project.

          Mixed methods research is a research approach that combines both qualitative and quantitative research methods within a single study or research project. This methodology aims to provide a more comprehensive understanding of a research problem by integrating the strengths of both qualitative and quantitative research. Examples of mixed methods research include combining surveys with in-depth interviews, using quantitative data to identify patterns and trends followed by qualitative data to explore the underlying reasons and meanings, or incorporating qualitative findings to help interpret and validate quantitative results. Overall, mixed methods research provides a more holistic understanding of a research question by acknowledging and leveraging the strengths of both qualitative and quantitative approaches.

          bias in mixed methods research design

          A mixed methods research bias in which the mixed methods approach used in a study is not appropriate for the research question and problem.

          inadequate rationale for mixed methods design

This corresponds to signaling question 5.1 in the Mixed Methods Appraisal Tool (MMAT): Is there an adequate rationale for using a mixed methods design to address the research question?

          Common mixed methods designs include:

Convergent design: The QUAL and QUAN components are usually (but not necessarily) concomitant. The purpose is to examine the same phenomenon by interpreting QUAL and QUAN results (bringing data analysis together at the interpretation stage), by integrating QUAL and QUAN datasets (e.g., data on same cases), or by transforming data (e.g., quantization of qualitative data).

Sequential explanatory design: Results of the phase 1 QUAN component inform the phase 2 QUAL component. The purpose is to explain QUAN results using QUAL findings. E.g., the QUAN results guide the selection of QUAL data sources and data collection, and the QUAL findings contribute to the interpretation of QUAN results.

Sequential exploratory design: Results of the phase 1 QUAL component inform the phase 2 QUAN component. The purpose is to explore, develop and test an instrument (or taxonomy), or a conceptual framework (or theoretical model). E.g., the QUAL findings inform the QUAN data collection, and the QUAN results allow a statistical generalization of the QUAL findings.

Key references: Creswell et al. (2011); Creswell and Plano Clark (2017); O'Cathain (2010)

          Reference - Hong QN, Pluye P, Fàbregues S, Bartlett G, Boardman F, Cargo M, Dagenais P, Gagnon M-P, Griffiths F, Nicolau B, O’Cathain A, Rousseau M-C, Vedel I. Mixed Methods Appraisal Tool (MMAT), version 2018. Registration of Copyright (#1148552), Canadian Intellectual Property Office, Industry Canada. Accessed at: http://mixedmethodsappraisaltoolpublic.pbworks.com/w/file/fetch/127916259/MMAT_2018_criteria-manual_2018-08-01_ENG.pdf

          ineffective integration of qualitative and quantitative study components

          A mixed methods research bias in which the qualitative research and quantitative research components are not adequately combined.

          inappropriate interpretation of integration of qualitative and quantitative findings

          A mixed methods research bias in which the process of combining the results of the constituent analyses is flawed.

          This criterion is related to meta-inference, which is defined as the overall interpretations derived from integrating qualitative and quantitative findings (Teddlie and Tashakkori, 2009). Meta-inference occurs during the interpretation of the findings from the integration of the qualitative and quantitative components, and shows the added value of conducting a mixed methods study rather than having two separate studies. (Pluye et al 2018)

          inadequate handling of inconsistency between qualitative and quantitative findings

          A mixed methods research bias in which discrepancies in the results from the qualitative and quantitative components are not adequately addressed.

          When integrating the findings from the qualitative and quantitative components, divergences and inconsistencies (also called conflicts, contradictions, discordances, discrepancies, and dissonances) can be found. It is not sufficient to only report the divergences; they need to be explained. Different strategies to address the divergences have been suggested such as reconciliation, initiation, bracketing and exclusion (Pluye et al., 2009b). (Reference - Hong QN, Pluye P, Fàbregues S, Bartlett G, Boardman F, Cargo M, Dagenais P, Gagnon M-P, Griffiths F, Nicolau B, O’Cathain A, Rousseau M-C, Vedel I. Mixed Methods Appraisal Tool (MMAT), version 2018. Registration of Copyright (#1148552), Canadian Intellectual Property Office, Industry Canada. Accessed at: http://mixedmethodsappraisaltoolpublic.pbworks.com/w/file/fetch/127916259/MMAT_2018_criteria-manual_2018-08-01_ENG.pdf)

           

          The GRADE Ontology Working Group had one term open for vote last week, Inconsistency, which received 11 votes (6 affirmative and 5 negative) and 8 comments. The term definition and comment for application shown below resulted from a group discussion of those comments. A portion of the February 9, 2024 GRADE Ontology meeting was recorded and the recording can be viewed here.

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

          Inconsistency

          Variations in the findings from which the estimate of effect was derived.

          Variations may include differences across estimates from different studies or may include differences across estimates from different analyses (such as sensitivity analyses) within studies. Variations may include differences in the magnitude or direction of the findings (point estimates and confidence intervals). Inconsistency is one of the domains that can impact the rating of the certainty of evidence. Related terms for "inconsistency" include "heterogeneity", "statistical heterogeneity", and "clinical heterogeneity". Statistical heterogeneity, methodological heterogeneity, and clinical heterogeneity (differences in the population, intervention, comparison, or outcome addressed) can influence the degree of inconsistency of the findings. Any variations in findings may be considered inconsistency. The degree of inconsistency may range from being non-existent to large, and it may be explained or unexplained. In GRADE, concerns about inconsistency are based on the magnitude and explainability.
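Statistical heterogeneity, one influence on inconsistency noted above, is often quantified with Cochran's Q and the I² statistic. The sketch below is an illustration (not part of the GRADE definition or the voted term text), computing both from study-level effect estimates and their standard errors using fixed-effect inverse-variance weights:

```python
def heterogeneity(estimates, std_errors):
    """Compute Cochran's Q and I-squared (as a percentage) from study-level
    effect estimates and their standard errors."""
    weights = [1.0 / se ** 2 for se in std_errors]  # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, estimates))
    df = len(estimates) - 1
    # I-squared: proportion of variability beyond what chance (df) would explain
    i_squared = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i_squared
```

An I² near 0 suggests the observed variation is compatible with chance alone, while a large I² flags inconsistency that warrants explanation.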

           

          The Project Management Working Group created a proposed agenda for the following week (February 12-February 16, 2024):

          Day/Time (Eastern)

          Working Group

          Agenda Items

          Monday 8-9 am

          Project Management

          FHIR changes and EBMonFHIR Implementation Guide issues

          Monday 9-10 am

          Setting the Scientific Record on FHIR

          Development of GRADEpro-to-FEvIR Converter

          Monday 10-11 am

          CQL Development (a CDS EBMonFHIR sub-WG)

          Develop CQL for StudyEligibilityCriteria: Adolescents with non-syndromic obesity

          Monday 2-3 pm

          Statistic Terminology

          SEVCO terms for measures of dispersion and calibration (4 terms open for vote)

          Tuesday 9 am-10 am

          Measuring the Rate of Scientific Knowledge Transfer

          Review changes for overall project support (SKAF Board last Tuesday of each month)

          Tuesday 2-3 pm

          StatisticsOnFHIR (a CDS EBMonFHIR sub-WG)

          Review progress on ComparativeEvidenceReport example

          Tuesday 3-4 pm

          Ontology Management

          Review objectives and priorities

          Wednesday 8-9 am

          Funding the Ecosystem Infrastructure

          Product definition for making guidelines computable

          Wednesday 9-10 am

          Communications (Awareness, Scholarly Publications)

          Publications (Study Design Terminology), Presentations (Global Evidence Summit 2), Website

          Thursday 8-9 am

          EBM Implementation Guide (a CDS EBMonFHIR sub-WG)

          Review IG Ballot feedback, Prepare EBMonFHIR IG CodeSystem for HL7 terminology

          Thursday 9-10 am

          Computable EBM Tools Development

          Review GRADEpro-to-FEvIR Converter progress

          Friday 9-10 am

          Risk of Bias Terminology

          Review SEVCO terms for qualitative research bias and study selection bias (12 terms open for vote)

          Friday 10-11 am

          GRADE Ontology

          Term development (Inconsistency - 1 term open for vote)

          Friday 12-1 pm

          Project Management

          Prepare Weekly Agenda

           

          Releases on the FEvIR Platform:

           

          The Fast Evidence Interoperability Resources (FEvIR) Platform is available for use now, but is “pre-release”.  The current version is 0.205.0 (February 9, 2024). Viewing resources is open without login.  Signing in is free and required to create content (which can then only be edited by the person who created the content).

          Release 0.205.0 (February 9, 2024): alert modals can be closed by pressing the Enter key. Alerts closed during your current session can be viewed by visiting your Edit User Profile page and clicking the "See current alert history" button, in case an alert was accidentally closed before it could be read.

          The Computable Publishing®: FEvIR-to-ClinicalTrials.gov Converter version 4.11.0 (February 9, 2024) converts FEvIR Resources in FHIR JSON back to the ClinicalTrials.gov format.

          Release 4.11.0 (February 9, 2024): the alert messages for conversion no longer use web browser alerts, so they are less intrusive and easier to close.

           

          Quote for Thought: “Before you criticize someone, you should walk a mile in their shoes. That way when you criticize them, you are a mile away from them and you have their shoes.”
          —Jack Handey

          Joanne Dehnbostel

          Feb 12, 2024, 3:23:14 AM
          to Health Evidence Knowledge Accelerator (HEvKA), Health Evidence Knowledge Accelerator (HEvKA) Weekly Update

           

          19 people (BA, CE, GL, HK, HL, IK, JD, JJ, JO, KR, KS, KW, MA, PW, RC, RL, SM, SS, TD) from 8 countries (Belgium, Canada, Finland, Germany, Norway, Taiwan, UK, USA) participated today in up to 14 active working group meetings.

          Project Coordination Updates

          On February 5, the Project Management Working Group worked to resolve several FHIR trackers and added a request for approval of this UTG Change Proposal

           UP-427 - Add 3 terms to http://terminology.hl7.org/CodeSystem/variable-role

          to the agenda of the next HL7 Clinical Decision Support (CDS) Meeting. 

          On February 9, the Project Management Working Group created a proposed agenda for the following week (February 12-February 16, 2024):

          Day/Time (Eastern)

          Working Group

          Agenda Items

          Monday 8-9 am

          Project Management

          FHIR changes and EBMonFHIR Implementation Guide issues

          Monday 9-10 am

          Setting the Scientific Record on FHIR

          Development of GRADEpro-to-FEvIR Converter

          Monday 10-11 am

          CQL Development (a CDS EBMonFHIR sub-WG)

          Develop CQL for StudyEligibilityCriteria: Adolescents with non-syndromic obesity

          Monday 2-3 pm

          Statistic Terminology

          SEVCO terms for measures of dispersion and calibration (4 terms open for vote)

          Tuesday 9 am-10 am

          Measuring the Rate of Scientific Knowledge Transfer

          Review changes for overall project support (SKAF Board last Tuesday of each month)

          Tuesday 2-3 pm

          StatisticsOnFHIR (a CDS EBMonFHIR sub-WG)

          Review progress on ComparativeEvidenceReport example

          Tuesday 3-4 pm

          Ontology Management

          Review objectives and priorities

          Wednesday 8-9 am

          Funding the Ecosystem Infrastructure

          Product definition for making guidelines computable

          Wednesday 9-10 am

          Communications (Awareness, Scholarly Publications)

          Publications (Study Design Terminology), Presentations (Global Evidence Summit 2), Website

          Thursday 8-9 am

          EBM Implementation Guide (a CDS EBMonFHIR sub-WG)

          Review IG Ballot feedback, Prepare EBMonFHIR IG CodeSystem for HL7 terminology

          Thursday 9-10 am

          Computable EBM Tools Development

          Review GRADEpro-to-FEvIR Converter progress

          Friday 9-10 am

          Risk of Bias Terminology

          Review SEVCO terms for qualitative research bias and study selection bias (12 terms open for vote)

          Friday 10-11 am

          GRADE Ontology

          Term development (Inconsistency - 1 term open for vote)

          Friday 12-1 pm

          Project Management

          Prepare Weekly Agenda

          On February 5, the Setting the Scientific Record on FHIR Working Group met with representatives from SRDR and discussed changes in the R6 version of FHIR, especially as they pertain to the Group and EvidenceVariable Resources.

          On February 7, the Communications Working Group created a Scientific Knowledge Accelerator Foundation workshop proposal titled "Scientific Evidence Code System (SEVCO): a common language for communicating evidence" for Global Evidence Summit 2 and discussed creating a proposal for an additional oral presentation to introduce the Measuring the Rate of Scientific Knowledge Transfer project. 

           

          SEVCO Updates

          On February 5, the Statistic Terminology Working Group found that none of the 5 terms open for vote last week received enough votes to be approved for the code system. However, two of the terms had comments which were discussed in today’s meeting. This discussion resulted in the deprecation of the term measure of discrimination from the code system, and moving area under the curve directly under statistic in the code system hierarchy. The comment for application for calibration intercept was also modified. We will continue to discuss the definition for area under the curve next week.

          There are currently 4 Statistic terms open for voting: 

           

           

          Term

          Definition

          Alternative Terms

          Comment for application

          calibration intercept

          A measure of calibration that is the difference between the mean expected value and the mean observed value.

          calibration-in-the-large

          The calibration intercept is computed from a statistical model for calibration of binary variables (0 or 1) where the log odds of the predicted probabilities is a linear function of the empirical frequencies. The notion of calibration-in-the-large is that the intercept is a gross assessment of whether the average prediction matches the average outcome; however, this interpretation is exquisitely sensitive to the choice of referent factors within the prediction model.

          calibration slope

          A measure of calibration computed from the rate of change in observed value per unit change in the predicted value.

          calibration-in-the-small

          The calibration slope is computed from a statistical model for calibration of binary variables (0 or 1) where the log odds of the predicted probabilities is a linear function of the empirical frequencies. 1 represents a perfect calibration, 0 represents lack of correlation. Inspection of the graph allows identification of areas of under and over confidence.
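As an illustrative sketch (not part of the SEVCO term definitions), both statistics can be estimated by fitting a logistic regression of observed binary outcomes on the logit of the predicted probabilities; the function and variable names below are assumptions for the example. Fixing the slope at 1 and refitting only the intercept gives the conventional calibration-in-the-large intercept; the joint fit below returns the calibration slope and the intercept of the recalibration model.

```python
import math
import random

def logit(p):
    return math.log(p / (1.0 - p))

def expit(z):
    return 1.0 / (1.0 + math.exp(-z))

def calibration_fit(y, p_hat, iters=25):
    """Fit logit(P(y=1)) = a + b*logit(p_hat) by Newton-Raphson.

    Returns (a, b), where b is the calibration slope and a is the
    intercept of the recalibration model."""
    x = [logit(p) for p in p_hat]
    a, b = 0.0, 1.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for xi, yi in zip(x, y):
            mu = expit(a + b * xi)   # predicted probability under current fit
            w = mu * (1.0 - mu)      # IRLS weight
            g0 += yi - mu            # gradient of the log-likelihood
            g1 += (yi - mu) * xi
            h00 += w                 # Fisher information terms
            h01 += w * xi
            h11 += w * xi * xi
        det = h00 * h11 - h01 * h01
        a += (h11 * g0 - h01 * g1) / det
        b += (h00 * g1 - h01 * g0) / det
    return a, b

# Simulated, perfectly calibrated predictions: the slope should be near 1
# and the intercept near 0.
random.seed(1)
p_hat = [random.uniform(0.05, 0.95) for _ in range(4000)]
y = [1 if random.random() < p else 0 for p in p_hat]
a, b = calibration_fit(y, p_hat)
```

A slope below 1 indicates predictions that are too extreme (overfitting); a slope above 1 indicates predictions that are too moderate.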

          standard error of the difference between independent means

          A measure of dispersion applied to differences between means of independent groups across hypothetical repeated random samples.

          <See standard error of the difference between independent means for the comment for application which includes the formula.>

          standard error of the difference between independent proportions

          A measure of dispersion applied to differences between proportions arising from independent groups across hypothetical repeated random samples.

          <See standard error of the difference between independent proportions for the comment for application which includes the formula.>
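The SEVCO comments for application containing the formulas are not reproduced above. As an illustrative sketch, the common textbook forms of these two standard errors can be computed as follows (function and variable names are assumptions for the example):

```python
import math

def se_diff_independent_means(s1, n1, s2, n2):
    """Standard error of the difference between two independent sample means,
    given each group's sample standard deviation (s) and sample size (n)."""
    return math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)

def se_diff_independent_proportions(p1, n1, p2, n2):
    """Standard error of the difference between two independent sample
    proportions (unpooled form), given each group's proportion and size."""
    return math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
```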

           

          To participate you can join the Scientific Evidence Code System (SEVCO) Expert Working Group at https://fevir.net/resources/Project/27845

           

          On February 9, the Risk of Bias Terminology Working Group found that none of the ten terms open for voting last week received enough votes to be approved. All ten terms (study eligibility criteria not prespecified, study eligibility criteria not appropriate for review question, study eligibility criteria ambiguous, study eligibility criteria limits for study characteristics not appropriate, incoherence among data, analysis, and interpretation, mixed methods research bias, bias in mixed methods research design, ineffective integration of qualitative and quantitative study components, inappropriate interpretation of integration of qualitative and quantitative findings, and inadequate handling of inconsistency between qualitative and quantitative findings) remain open for voting.

          Based on comments received, study eligibility criteria limits for study characteristics not appropriate now has an additional comment for application.

          Two new terms were then defined (study eligibility criteria limits for study report characteristics not appropriate and bias in search strategy), so there are currently 12 terms open for voting.

           

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

          study eligibility criteria not prespecified

          A bias in study eligibility criteria in which the criteria are not stated before the study selection process occurs.

          Failure to specify the study eligibility criteria before evaluating studies for selection can lead to a situation in which the data discovered during the study selection process influences the criteria for selection in a way that introduces systematic error.

          study eligibility criteria not appropriate for review question

          A bias in study eligibility criteria resulting from inclusion and exclusion criteria used to select studies for review that could result in a mismatch between the selected studies and the research question.

          The mismatch between the selected studies and the research question may relate to differences in the population, exposures, or outcomes studied from those of interest.

          study eligibility criteria ambiguous

          A bias in study eligibility criteria due to unclear specification.

          Eligibility criteria that are not sufficiently described to enable reproduction of study selection can introduce systematic error.

          study eligibility criteria limits for study characteristics not appropriate

          A bias in study eligibility criteria that is specific to restrictions based on study characteristics.

          Any restrictions applied on the basis of study characteristics should not introduce a bias in study eligibility criteria. Examples of such restrictions may include criteria based on study size, study design, study quality, or date of publication.

          In the ROBIS tool used for risk of bias assessment of systematic reviews, there is a question (1.4 Were all restrictions in eligibility criteria based on study characteristics appropriate?) that is different from the one which refers to whether the eligibility criteria are appropriate to the review question. Therefore, a separate term is available in SEVCO.

          study eligibility criteria limits for study report characteristics not appropriate

          A bias in study eligibility criteria that is specific to restrictions on the status, structure, language, or accessibility of the study report.

          Examples of study report characteristics include publication status (including preprints and unpublished data), format, language, and availability of data. The ROBIS tool used for assessing risk of bias of systematic reviews includes a signaling question '1.5 Were any restrictions in eligibility criteria based on sources of information appropriate?'

          bias in search strategy

          A study selection bias specific to the strategy used to identify potentially eligible studies.

          incoherence among data, analysis, and interpretation

          A qualitative research bias in which there is any mismatch among hypothesis, data collected, data analysis, and results interpretation.

          The term mismatch applies to an inappropriate or wrong or inadequate relationship.

          mixed methods research bias

          A bias specific to the alignment of design, conduct, analysis or reporting of qualitative research and quantitative research within the same research project.

          Mixed methods research is a research approach that combines both qualitative and quantitative research methods within a single study or research project. This methodology aims to provide a more comprehensive understanding of a research problem by integrating the strengths of both qualitative and quantitative research. Examples of mixed methods research include combining surveys with in-depth interviews, using quantitative data to identify patterns and trends followed by qualitative data to explore the underlying reasons and meanings, or incorporating qualitative findings to help interpret and validate quantitative results. Overall, mixed methods research provides a more holistic understanding of a research question by acknowledging and leveraging the strengths of both qualitative and quantitative approaches.

          bias in mixed methods research design

          A mixed methods research bias in which the mixed methods approach used in a study is not appropriate for the research question and problem.

          inadequate rationale for mixed methods design

          The corresponding signaling question in the Mixed Methods Appraisal Tool (MMAT) is 5.1: "Is there an adequate rationale for using a mixed methods design to address the research question?"

          Common mixed methods designs include:

          Convergent design: The QUAL and QUAN components are usually (but not necessarily) concomitant. The purpose is to examine the same phenomenon by interpreting QUAL and QUAN results (bringing data analysis together at the interpretation stage), by integrating QUAL and QUAN datasets (e.g., data on the same cases), or by transforming data (e.g., quantization of qualitative data).

          Sequential explanatory design: Results of the phase 1 QUAN component inform the phase 2 QUAL component. The purpose is to explain QUAN results using QUAL findings. E.g., the QUAN results guide the selection of QUAL data sources and data collection, and the QUAL findings contribute to the interpretation of QUAN results.

          Sequential exploratory design: Results of the phase 1 QUAL component inform the phase 2 QUAN component. The purpose is to explore, develop, and test an instrument (or taxonomy) or a conceptual framework (or theoretical model). E.g., the QUAL findings inform the QUAN data collection, and the QUAN results allow a statistical generalization of the QUAL findings.

          Key references: Creswell et al. (2011); Creswell and Plano Clark, (2017); O'Cathain (2010)

          Reference - Hong QN, Pluye P, Fàbregues S, Bartlett G, Boardman F, Cargo M, Dagenais P, Gagnon M-P, Griffiths F, Nicolau B, O’Cathain A, Rousseau M-C, Vedel I. Mixed Methods Appraisal Tool (MMAT), version 2018. Registration of Copyright (#1148552), Canadian Intellectual Property Office, Industry Canada. Accessed at: http://mixedmethodsappraisaltoolpublic.pbworks.com/w/file/fetch/127916259/MMAT_2018_criteria-manual_2018-08-01_ENG.pdf

          ineffective integration of qualitative and quantitative study components

          A mixed methods research bias in which the qualitative research and quantitative research components are not adequately combined.

          inappropriate interpretation of integration of qualitative and quantitative findings

          A mixed methods research bias in which the process of combining the results of the constituent analyses is flawed.

          This criterion is related to meta-inference, which is defined as the overall interpretations derived from integrating qualitative and quantitative findings (Teddlie and Tashakkori, 2009). Meta-inference occurs during the interpretation of the findings from the integration of the qualitative and quantitative components, and shows the added value of conducting a mixed methods study rather than having two separate studies. (Pluye et al 2018)

          inadequate handling of inconsistency between qualitative and quantitative findings

          A mixed methods research bias in which discrepancies in the results from the qualitative and quantitative components are not adequately addressed.

          When integrating the findings from the qualitative and quantitative components, divergences and inconsistencies (also called conflicts, contradictions, discordances, discrepancies, and dissonances) can be found. It is not sufficient to only report the divergences; they need to be explained. Different strategies to address the divergences have been suggested such as reconciliation, initiation, bracketing and exclusion (Pluye et al., 2009b). (Reference - Hong QN, Pluye P, Fàbregues S, Bartlett G, Boardman F, Cargo M, Dagenais P, Gagnon M-P, Griffiths F, Nicolau B, O’Cathain A, Rousseau M-C, Vedel I. Mixed Methods Appraisal Tool (MMAT), version 2018. Registration of Copyright (#1148552), Canadian Intellectual Property Office, Industry Canada. Accessed at: http://mixedmethodsappraisaltoolpublic.pbworks.com/w/file/fetch/127916259/MMAT_2018_criteria-manual_2018-08-01_ENG.pdf)

          FEvIR Platform (and Tools) Development Updates:

           

          On February 8, the Computable EBM Tools Development Working Group submitted the completed GINTech workshop proposal titled "What is Needed to Make Guidelines Computable?: A consensus-development exercise" for Global Evidence Summit 2.

          The group also reviewed progress on development of the GRADEpro-to-FEvIR Converter Tool.

          On February 5, the CQL Development Working Group discussed how Clinical Quality Language (CQL) could be used to describe eligibility criteria for a group of adolescents with obesity and created the Group Resource StudyEligibilityCriteria: Obese patients 12-17 years old, which contains a completed CQL expression. The group went on to define a group with obesity including any of the conditions in the ObesityConditions ValueSet except generalized or morbid obesity, and will continue to work on adding CQL to StudyEligibilityCriteria: Adolescents with non-syndromic obesity.
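The eligibility logic described above (ages 12-17 with an obesity condition, excluding generalized or morbid obesity) can be sketched in plain Python for illustration. The condition codes below are hypothetical stand-ins, not the actual ObesityConditions ValueSet, and this is not the working group's CQL:

```python
# Hypothetical stand-ins for the ObesityConditions ValueSet and its exclusions.
OBESITY_CONDITIONS = {"obesity", "childhood-obesity",
                      "generalized-obesity", "morbid-obesity"}
EXCLUDED = {"generalized-obesity", "morbid-obesity"}

def is_eligible(age_years, condition_codes):
    """Return True if the patient is 12-17 years old and has at least one
    obesity condition that is not in the excluded set."""
    if not (12 <= age_years <= 17):
        return False
    qualifying = (set(condition_codes) & OBESITY_CONDITIONS) - EXCLUDED
    return bool(qualifying)
```

In CQL the same logic would be expressed declaratively against FHIR Patient and Condition resources, with the ValueSet referenced by its canonical URL rather than hard-coded.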

          HL7 Standards Development Updates:

          On February 8, the EBM Implementation Guide Working Group (a CDS EBMonFHIR sub-WG) reviewed the EBMonFHIR Implementation Guide Ballot feedback and made improvements to the EBMonFHIR IG, including the application of:

          https://jira.hl7.org/browse/FHIR-43457

          https://build.fhir.org/ig/HL7/ebm/StructureDefinition-comparative-evidence-report.html – the section:researchStudy now has a code for research-study, and its entry References only ResearchStudy or Citation.

          Headings and italics were added to improve the implementation guide as described in https://jira.hl7.org/browse/FHIR-44044. More detail may be added during the Monday Project Management meeting.

          On February 6, the StatisticsOnFHIR WG (a CDS EBMonFHIR sub-WG) continued to work on a ResearchStudy example https://fevir.net/resources/composition/201561.

           

          Knowledge Ecosystem Liaison/Coordination Updates:

          On February 7, the Funding the Ecosystem Infrastructure Working Group created a GINTech workshop proposal titled "What is Needed to Make Guidelines Computable?: A consensus-development exercise" for Global Evidence Summit 2

          On February 6, the Ontology Management WG continued to work on a ResearchStudy example https://fevir.net/resources/composition/201561.

          On February 9, the GRADE Ontology Working Group discussed Inconsistency, the one term open for vote last week, which received 11 votes (6 affirmative and 5 negative) and 8 comments. The term definition and comment for application shown below resulted from a group discussion of those comments. A portion of the February 9, 2024 GRADE Ontology meeting was recorded and the recording can be viewed here.

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

          Inconsistency

          Variations in the findings from which the estimate of effect was derived.

          Variations may include differences across estimates from different studies or may include differences across estimates from different analyses (such as sensitivity analyses) within studies. Variations may include differences in the magnitude or direction of the findings (point estimates and confidence intervals). Inconsistency is one of the domains that can impact the rating of the certainty of evidence. Related terms for "inconsistency" include "heterogeneity", "statistical heterogeneity", and "clinical heterogeneity". Statistical heterogeneity, methodological heterogeneity, and clinical heterogeneity (differences in the population, intervention, comparison, or outcome addressed) can influence the degree of inconsistency of the findings. Any variations in findings may be considered inconsistency. The degree of inconsistency may range from being non-existent to large, and it may be explained or unexplained. In GRADE, concerns about inconsistency are based on the magnitude and explainability.

           

          Research Development Updates:

          On February 6, the Measuring the Rate of Scientific Knowledge Transfer WG reviewed software updates on the FEvIR Platform that will support the next phase of the project and used the new software to load RIS files of articles that will be used for data entry. 

           

          Releases on the FEvIR Platform:

          The Fast Evidence Interoperability Resources (FEvIR) Platform is available for use now, but is “pre-release”.  The current version is 0.205.0 (February 9, 2024). Viewing resources is open without login.  Signing in is free and required to create content (which can then only be edited by the person who created the content).

          Release 0.202.0 (February 5, 2024), the bulk resource uploader limit increased from 100 to 500.

          Release 0.203.0 (February 6, 2024): the bulk resource importer now supports up to 1,000 resources, up from 500. Toast pop-ups now appear above modals, and the X close button has a slightly improved appearance for modals and elsewhere.

          Release 0.204.0 (February 7, 2024): certain alert modals, including the RADAR tool modals, can now be closed by clicking outside them or by pressing the ESC key. On the create new resource page, when a resource is submitted without a title, a modal pops up asking for a title instead of a web browser alert.

          Release 0.205.0 (February 9, 2024): alert modals can be closed by pressing the Enter key. Alerts closed during your current session can be viewed by visiting your Edit User Profile page and clicking the "See current alert history" button, in case an alert was accidentally closed before it could be read.

          The Computable Publishing®: FEvIR-to-ClinicalTrials.gov Converter version 4.11.0 (February 9, 2024) converts FEvIR Resources in FHIR JSON back to the ClinicalTrials.gov format.

          Release 4.11.0 (February 9, 2024): the alert messages for conversion no longer use web browser alerts, so they are less intrusive and easier to close.

          The FEvIR®: RAte of Dissemination Assessment Research (RADAR) Tool Version 0.7.1 (February 8, 2024) facilitates research conduct and project management for an investigation to measure the rate of scientific knowledge transfer.

          Release 0.5.0 (February 5, 2024) added RIS Citation importing functionality.

          Release 0.6.0 (February 6, 2024) now autofills the results-in-registry (yes or no) and Date of Results Posted fields when a user inputs an NCT ID in the study record, loading the data from ClinicalTrials.gov. The import-citation-via-PMID text field now removes tab characters that could be accidentally copied from a table. An additional Save button was added to save and close the modal, while the original Save button saves and keeps the modal open so the user can continue working. The invite modal now lets users know when they aren't logged in and displays a sign-in button.

          Release 0.7.0 (February 7, 2024): all alert and confirmation prompts now use a modal instead of in-browser alerts. Modals can be closed by clicking outside them or by pressing the ESC key as long as no changes have been made; otherwise the X button can be used to close, and the user will be asked to confirm discarding changes.

          Release 0.7.1 (February 8, 2024): when an NCT ID is entered as the study record, the Results First Posted date loaded from ClinicalTrials.gov is now correct. The navigation links on the left side now work.

          The Computable Publishing®: RIS-to-FEvIR Converter version 0.13.0 (February 5, 2024) converts data with an RIS format into FEvIR Resources in FHIR Citation JSON.

          Release 0.13.0 (February 5, 2024) improved the user interface: UI components from a previous step are now hidden, making the tool more intuitive to use, and the RIS text box no longer overly increases the page height.
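RIS is a simple line-oriented bibliographic format in which each line carries a two-character tag followed by "  - " and a value, with the ER tag terminating a record. As an illustrative sketch (not the converter's actual implementation), a minimal parser might look like:

```python
def parse_ris(text):
    """Parse RIS-formatted text into a list of records, each a dict
    mapping tag -> list of values (tags like AU may repeat)."""
    records, current = [], {}
    for line in text.splitlines():
        # A valid RIS line has the shape 'XX  - value'.
        if len(line) < 6 or line[2:6] != "  - ":
            continue
        tag, value = line[:2], line[6:].strip()
        if tag == "ER":          # end-of-record marker
            if current:
                records.append(current)
            current = {}
        else:
            current.setdefault(tag, []).append(value)
    return records
```

A production converter would also map each RIS tag (TY, TI, AU, PY, and so on) onto the corresponding FHIR Citation elements.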

           

          Quotes for Thought:

          "How can you know what you are capable of if you don't embrace the unknown?" – Esmeralda Santiago 

          "Vitality shows in not only the ability to persist but the ability to start over." – F. Scott Fitzgerald

          "May the flowers remind us why the rain was so necessary." – Xan Oku

          "Second thoughts are ever wiser." – Euripides

          “Before you criticize someone, you should walk a mile in their shoes. That way when you criticize them, you are a mile away from them and you have their shoes.”
          —Jack Handey

           

          To get involved or stay informed: HEvKA Project Page on FEvIR Platform, HEvKA Project Page on HL7 Confluence, or join any of the groups that are now meeting in the following weekly schedule:

          Joanne Dehnbostel

          Feb 13, 2024, 9:02:01 AM
          to Health Evidence Knowledge Accelerator (HEvKA)

           

           

          6 people (BA, HL, JD, KS, KW, MA) participated in up to four active working group meetings. 

          Today the Project Management Working Group made progress with FHIR trackers.

          The Setting the Scientific Record on FHIR Working Group saw a demonstration of the new GRADEpro-to-FEvIR Converter tool, still in development on the FEvIR Platform.

          The CQL Development Working Group (a CDS EBMonFHIR sub-WG) discussed potential real-world examples to be converted to CQL, including:

           

          This study: https://www.nejm.org/doi/full/10.1056/NEJMoa2204253 (ClinicalTrials.gov entry: http://clinicaltrials.gov/show/NCT02210650 , protocol: https://www.nejm.org/doi/suppl/10.1056/NEJMoa2204253/suppl_file/nejmoa2204253_protocol.pdf )

          And https://www.crohnscolitisfoundation.org/research/current-research-initiatives/ibd-plexus/research-initiatives

          The Statistic Terminology Working Group found that all four terms which had been open for vote received enough votes to be approved for the code system; however, a discussion of the comments on those terms led to a change in the definition of calibration slope, so that term is open for vote again this week. There is currently one term open for vote.

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

          calibration slope

          A measure of calibration that is the rate of change in the appropriately transformed value per unit change of the correspondingly transformed predicted value.

          calibration-in-the-small

          For calibration of binary outcome variables (0 or 1), the calibration slope is computed from a statistical model where the log odds of the predicted probabilities is a linear function of the empirical frequencies (logistic regression). The transformation is log odds (logit, the link function for a generalized linear model for the expected value of the outcome).

          For calibration of count outcome variables (0, 1, 2, 3, ...), the calibration slope may be computed from a statistical model where the log of the predicted mean counts is a linear function of the empirical frequencies determined by unique combinations of the covariates. The transformation is log, the link function, of the counts.

          There are other types of outcome variables for which the calibration slope may be obtained.

          Slopes further from 1.0 indicate over- or under-confidence in the predictions at upper and lower values.
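The binary-outcome case described above can be sketched numerically. This is a minimal pure-Python illustration (the function names and the Newton-Raphson fitting routine are our own, not part of the SEVCO definition): it regresses the observed outcomes on the logit of the predicted probabilities, and the fitted coefficient on that logit is the calibration slope.

```python
import math

def logit(p):
    """Log odds transformation, the link function for binary outcomes."""
    return math.log(p / (1 - p))

def sigmoid(z):
    """Inverse of the logit."""
    return 1.0 / (1.0 + math.exp(-z))

def calibration_slope(pred_probs, outcomes, iters=25):
    """Fit the logistic regression outcome ~ a + b * logit(predicted probability)
    by Newton-Raphson; b is the calibration slope (1.0 = perfect calibration)."""
    x = [logit(p) for p in pred_probs]
    a, b = 0.0, 1.0  # start at the perfectly calibrated solution
    for _ in range(iters):
        mu = [sigmoid(a + b * xi) for xi in x]
        w = [m * (1.0 - m) for m in mu]
        # score (gradient of the log-likelihood)
        g0 = sum(yi - mi for yi, mi in zip(outcomes, mu))
        g1 = sum((yi - mi) * xi for yi, mi, xi in zip(outcomes, mu, x))
        # 2x2 Fisher information matrix
        h00 = sum(w)
        h01 = sum(wi * xi for wi, xi in zip(w, x))
        h11 = sum(wi * xi * xi for wi, xi in zip(w, x))
        det = h00 * h11 - h01 * h01
        a += (h11 * g0 - h01 * g1) / det
        b += (-h01 * g0 + h00 * g1) / det
    return a, b  # intercept ("calibration-in-the-large") and slope
```

For predictions whose observed frequencies match the predicted probabilities exactly, the fitted slope is 1; overconfident predictions (spread wider than the observed frequencies) yield a slope below 1.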

          All measures of dispersion, and all but this one measure of association, have now been approved, so the group looked ahead to the next set of terms. The term relative importance, which had been listed under percentage, was removed from the code system. The next set of terms to be defined will be the terms under area under the curve. Terms added today are: area under the longitudinal trajectory curve, area under the precision-recall curve, and partial area under the ROC curve. We will start to define these new terms next week.

           

          Quote for thought "Impossible standards just make life difficult" – Fortune Cookie

          Joanne Dehnbostel

          Feb 14, 2024, 7:58:59 AM
          to Health Evidence Knowledge Accelerator (HEvKA)

           

           

          7 people (BA, CE , HL, JD, KS, KW, RC) participated in up to three active working group meetings. 

          The Measuring the Rate of Scientific Knowledge Transfer Working Group discussed the latest developments in the  FEvIR®: RAte of Dissemination Assessment Research (RADAR) Tool which will support our next research project. The group then added instructions for investigators to the software. An abstract of the anticipated research project will be submitted to the upcoming Global Evidence Summit 2, https://www.globalevidencesummit.org/. Abstracts are due next week. The group hopes to refine and submit the finished abstract in next week's meeting.  

          The StatisticsOnFHIR Working Group (a CDS EBMonFHIR sub-WG) reviewed progress on a ComparativeEvidenceReport for a real-world evidence example, https://fevir.net/resources/Project/195718 . This week's emphasis was how to represent the variables and evidence contained in a CONSORT diagram using profiles from the Evidence Based Medicine (EBMonFHIR) Implementation Guide. Discussion of this topic resulted in a request to change the naming of two profiles in the implementation guide for clarity and consistency: https://jira.hl7.org/browse/FHIR-44740

          The Ontology Management Working Group discussed the following completed (or nearly completed) FHIR trackers which will support the work of multiple HL7 Working Groups. The group also discussed the upcoming HL7 Terminology freeze (on February 25).

          --- EBMonFHIR ---
          Successfully built; it just needs to be merged in FHIR, and approval is needed for the merge in the FHIR-extensions repos: https://jira.hl7.org/browse/FHIR-42885

          --- BR&R ---
          This is almost done; we just need to coordinate the Terminology THO changes: https://jira.hl7.org/browse/FHIR-43561

          --- STRUCTURED DOCUMENTS ---
          Completed and merged
          https://jira.hl7.org/browse/FHIR-34407

          The aforementioned improvements to the FHIR specification were mentioned in the HL7 Biomedical Research and Regulation Working Group (BR&R) meeting today. The BR&R group also discussed coordination of the representation of participant flow measures with our HEvKA StatisticsOnFHIR (a CDS EBMonFHIR sub-WG) group.

          Quote for thought: "It is not the mountain we conquer but ourselves." – Sir Edmund Hillary

          Joanne Dehnbostel

          Feb 14, 2024, 10:21:11 PM
          to Health Evidence Knowledge Accelerator (HEvKA)

           

           

          6 people (BA, CE , JD, JO, KS, MA) participated in up to two active working group meetings. 

          The Funding the Ecosystem Infrastructure Working Group created the following statements as a product definition for making guidelines computable:

          For (target customer) who (statement of the need or opportunity), the (product name) is a (product category) that (statement of key benefit – that is, compelling reason to buy)

          For (guideline developer) who (wants to share and store their content in computable form), the (Computable Publishing: Guideline Authoring Tool) is a (web-based user interface to enter data for automatic conversion to computable form) that (enables creation of standard computable content without having to learn technical specifications, i.e., enables creating computable content without any software engineering personnel). Creating computable content for guidelines is desired to make updating guidelines more efficient by limiting changes to only the concepts that need to change and coordinating the changes and packaging within linked content; to make shared guidelines easier to find by providing metadata that is used when searchers limit their searches; and to make shared guidelines easier to use by providing the guideline content in a form that is interpretable by software tools for guideline development (adaptation) and guideline implementation.

          For (a clinical trial investigator) who (needs to report the trial protocol in computable form), the (Computable Publishing: Research Protocol Authoring Tool) is a (web-based user interface to enter data for automatic conversion to computable form) that (enables creation of standard computable content without having to learn technical specifications, i.e., enables creating computable content without any software engineering personnel). Creating computable content for research protocols is desired to make updating protocols more efficient by limiting changes to only the concepts that need to change and coordinating the changes and packaging within linked content; to make reporting protocols easier by providing the report in forms meeting technical specifications; and to make shared protocols easier to use by providing the protocol content in a form that is interpretable by software tools for trial management.

          The Communications Working Group reviewed the HEvKA workshop submissions for the upcoming Global Evidence Summit 2 https://www.globalevidencesummit.org/ titled "Scientific Evidence Code System (SEVCO): a common language for communicating evidence" and "What is Needed to Make Guidelines Computable?: A consensus-development exercise". The group then worked on an abstract submission for the same Global Evidence Summit titled "How to measure the rate of knowledge transfer from clinical trials to clinical practice".

          Quote for thought "Love doesn't make the world go 'round. Love is what makes the ride worthwhile." – Franklin P. Jones

          Joanne Dehnbostel

          Feb 16, 2024, 7:58:27 AM
          to Health Evidence Knowledge Accelerator (HEvKA)

           

           

           

          5 people (IK, JD, KS, LL, MH) participated in up to two active working group meetings. 

          The EBM Implementation Guide Working Group (a CDS EBMonFHIR sub-WG) discussed instructions and guidance for how to use the Group Resource to describe eligibility criteria (Expressing Eligibility Criteria using the Group (R6) Resource - Reference Page).
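For readers unfamiliar with this approach, a Group resource expresses eligibility criteria as a list of characteristic elements, each with a code, a value, and an exclude flag. Below is a minimal illustrative sketch, written as a Python dict of the JSON structure; the field shapes follow the published FHIR R5 Group resource (R6 details may differ), and the criterion shown (age 12 years or older) is an invented example, not from the working group's discussion.

```python
# Illustrative only: one eligibility criterion expressed as a FHIR Group
# resource. Field shapes follow the R5 Group resource; R6 may differ.
eligibility_group = {
    "resourceType": "Group",
    "type": "person",
    "membership": "definitional",  # defined by criteria, not an enumerated list
    "characteristic": [
        {
            "code": {"text": "age"},
            "valueRange": {"low": {"value": 12, "unit": "years"}},
            "exclude": False,  # this is an inclusion criterion
        }
    ],
}
```

Exclusion criteria use the same characteristic structure with `"exclude": True`, which is what makes a single Group able to carry a study's full inclusion/exclusion list.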

          The Computable EBM Tools Development Working Group reviewed progress on the GRADEpro-to-FEvIR Converter tool on the FEvIR platform. The group then discussed potential improvements to the Summary of Findings Table user interface https://fevir.net/resources/Composition/175771 , including whether to add absolute or relative estimates or both,  and reviewed other recent developments on the FEvIR platform.

          The HL7 Biomedical Research and Regulation (BR&R) Working Group also discussed the Expressing Eligibility Criteria using the Group (R6) Resource - Reference Page and added examples to explain how to add expressions and when an extension is needed.

          The HL7 Structured Documents Working Group discussed FHIR trackers that we have been working to implement, including https://jira.hl7.org/browse/FHIR-34407 and https://jira.hl7.org/browse/FHIR-38201

          Quote for thought: “Turn your wounds into wisdom.” — Oprah Winfrey

          Joanne Dehnbostel

          Feb 17, 2024, 6:54:51 PM
          to Health Evidence Knowledge Accelerator (HEvKA)

           

           

          13 people (BA, CM, HK, HL, JB, JD, KS, KW, MA, PW, SM, SS, TD) participated in up to three active working group meetings. 

          Today the Risk of Bias Terminology Working Group found that of the 12 terms open for vote last week, 2 terms (study eligibility criteria not prespecified and study eligibility criteria ambiguous) received enough all-positive votes to be accepted into the code system. Two terms (study eligibility criteria not appropriate for review question and incoherence among qualitative data, analysis, and interpretation) received negative comments; these terms were edited based on those comments and the vote was restarted. If you have previously voted on these terms, you will need to vote again on the new definitions. The remaining terms (study eligibility criteria ambiguous, study eligibility criteria limits for study characteristics not appropriate, study eligibility criteria limits for study report characteristics not appropriate, mixed methods research bias, bias in mixed methods research design, ineffective integration of qualitative and quantitative study components, inappropriate interpretation of integration of qualitative and quantitative findings, and inadequate handling of inconsistency between qualitative and quantitative findings) are still open for vote with no changes. If you have already voted for these terms and are happy with your previous vote, there is no reason to vote again. Your vote can be changed at any time before approval, using My Ballot.

          The group also defined three new terms (database search sources inadequate, non-database search sources inadequate, and search strategy not sensitive) that are open for vote. There are currently 13 terms open for vote, including the 8 that are unchanged from last week.

           

          Code and Voting Results From Last Week

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

          SEVCO:00273

          Passed

          study eligibility criteria not prespecified

          A bias in study eligibility criteria in which the criteria are not stated before the study selection process occurs.

          Failure to specify the study eligibility criteria before evaluating studies for selection can lead to a situation in which the data discovered during the study selection process influences the criteria for selection in a way that introduces systematic error.

          SEVCO:00274

          Re-Opened

          study eligibility criteria not appropriate for review question

          A bias in study eligibility criteria due to a mismatch between the inclusion and exclusion criteria and the research question.

          The mismatch between the inclusion and exclusion criteria and the research question may relate to differences in the population, exposures, or outcomes studied from those of interest.

          SEVCO:00275

          Passed

          study eligibility criteria ambiguous

          A bias in study eligibility criteria due to unclear specification.

          Eligibility criteria that are not sufficiently described to enable reproduction of study selection can introduce systematic error.

          SEVCO:00276

          Still Open 

          study eligibility criteria limits for study characteristics not appropriate

          A bias in study eligibility criteria that is specific to restrictions based on study characteristics.

          Any restrictions applied on the basis of study characteristics should not introduce a bias in study eligibility criteria. Examples of such restrictions may include criteria based on study size, study design, study quality, or date of publication.

          In the ROBIS tool used for risk of bias assessment of systematic reviews, there is a question (1.4 Were all restrictions in eligibility criteria based on study characteristics appropriate?) that is different from the one which refers to whether the eligibility criteria are appropriate to the review question. Therefore, a separate term is available in SEVCO.

          SEVCO:00277

          Still Open 

          study eligibility criteria limits for study report characteristics not appropriate

          A bias in study eligibility criteria that is specific to restrictions on the status, structure, language, or accessibility of the study report.

          Examples of study report characteristics include publication status (including preprints and unpublished data), format, language, and availability of data. The ROBIS tool used for assessing risk of bias of systematic reviews includes a signaling question '1.5 Were any restrictions in eligibility criteria based on sources of information appropriate?'

          SEVCO:00395

          Still Open 

          bias in search strategy

          A study selection bias specific to the strategy used to identify potentially eligible studies.

          A qualitative research bias in which there is any mismatch among hypothesis, data collected, data analysis, and results interpretation.

          The term mismatch applies to an inappropriate or wrong or inadequate relationship.

          SEVCO:00029

          Still Open

          mixed methods research bias

          A bias specific to the alignment of design, conduct, analysis or reporting of qualitative research and quantitative research within the same research project.

          Mixed methods research is a research approach that combines both qualitative and quantitative research methods within a single study or research project. This methodology aims to provide a more comprehensive understanding of a research problem by integrating the strengths of both qualitative and quantitative research. Examples of mixed methods research include combining surveys with in-depth interviews, using quantitative data to identify patterns and trends followed by qualitative data to explore the underlying reasons and meanings, or incorporating qualitative findings to help interpret and validate quantitative results. Overall, mixed methods research provides a more holistic understanding of a research question by acknowledging and leveraging the strengths of both qualitative and quantitative approaches.

          SEVCO:00361

          Still Open

          bias in mixed methods research design

          A mixed methods research bias in which the mixed methods approach used in a study is not appropriate for the research question and problem.

          • inadequate rationale for mixed methods design

          This signaling question in the Mixed Methods Assessment Tool (MMAT) is 5.1. Is there an adequate rationale for using a mixed methods design to address the research question?

          Common mixed methods designs include:

          Convergent design: The QUAL and QUAN components are usually (but not necessarily) concomitant. The purpose is to examine the same phenomenon by interpreting QUAL and QUAN results (bringing data analysis together at the interpretation stage), or by integrating QUAL and QUAN datasets (e.g., data on same cases), or by transforming data (e.g., quantization of qualitative data).

          Sequential explanatory design: Results of the phase 1 QUAN component inform the phase 2 QUAL component. The purpose is to explain QUAN results using QUAL findings. E.g., the QUAN results guide the selection of QUAL data sources and data collection, and the QUAL findings contribute to the interpretation of QUAN results.

          Sequential exploratory design: Results of the phase 1 QUAL component inform the phase 2 QUAN component. The purpose is to explore, develop and test an instrument (or taxonomy), or a conceptual framework (or theoretical model). E.g., the QUAL findings inform the QUAN data collection, and the QUAN results allow a statistical generalization of the QUAL findings.

          Key references: Creswell et al. (2011); Creswell and Plano Clark, (2017); O'Cathain (2010)

          Reference - Hong QN, Pluye P, Fàbregues S, Bartlett G, Boardman F, Cargo M, Dagenais P, Gagnon M-P, Griffiths F, Nicolau B, O’Cathain A, Rousseau M-C, Vedel I. Mixed Methods Appraisal Tool (MMAT), version 2018. Registration of Copyright (#1148552), Canadian Intellectual Property Office, Industry Canada. Accessed at: http://mixedmethodsappraisaltoolpublic.pbworks.com/w/file/fetch/127916259/MMAT_2018_criteria-manual_2018-08-01_ENG.pdf

          SEVCO:00362

          Still Open

          ineffective integration of qualitative and quantitative study components

          A mixed methods research bias in which the qualitative research and quantitative research components are not adequately combined.

          SEVCO:00363

          Still Open

          inappropriate interpretation of integration of qualitative and quantitative findings

          A mixed methods research bias in which the process of combining the results of the constituent analyses is flawed.

          This criterion is related to meta-inference, which is defined as the overall interpretations derived from integrating qualitative and quantitative findings (Teddlie and Tashakkori, 2009). Meta-inference occurs during the interpretation of the findings from the integration of the qualitative and quantitative components, and shows the added value of conducting a mixed methods study rather than having two separate studies. (Pluye et al 2018)

          SEVCO:00364

          Still Open

          inadequate handling of inconsistency between qualitative and quantitative findings

          A mixed methods research bias in which discrepancies in the results from the qualitative and quantitative components are not adequately addressed.

          When integrating the findings from the qualitative and quantitative components, divergences and inconsistencies (also called conflicts, contradictions, discordances, discrepancies, and dissonances) can be found. It is not sufficient to only report the divergences; they need to be explained. Different strategies to address the divergences have been suggested such as reconciliation, initiation, bracketing and exclusion (Pluye et al., 2009b). (Reference - Hong QN, Pluye P, Fàbregues S, Bartlett G, Boardman F, Cargo M, Dagenais P, Gagnon M-P, Griffiths F, Nicolau B, O’Cathain A, Rousseau M-C, Vedel I. Mixed Methods Appraisal Tool (MMAT), version 2018. Registration of Copyright (#1148552), Canadian Intellectual Property Office, Industry Canada. Accessed at: http://mixedmethodsappraisaltoolpublic.pbworks.com/w/file/fetch/127916259/MMAT_2018_criteria-manual_2018-08-01_ENG.pdf)

          SEVCO:00263

          Newly Open

          database search sources inadequate

          A bias in search strategy in which the electronic sources are not sufficient to find the studies available in electronic sources.

          The set of databases (electronic sources) expected to include the studies of interest will vary with the review topic.

          SEVCO:00264

          Newly Open

          non-database search sources inadequate

          A bias in search strategy in which the sources other than electronic database sources are not sufficient to find the studies available.

          The set of sources other than databases (electronic sources) expected to include the studies of interest will vary with the review topic.

          SEVCO:00265

          Newly Open

          search strategy not sensitive

          A bias in search strategy in which the search terms and combinations of search terms are not sufficient to find the studies available.

          The search terms and combinations of search terms expected to include the studies of interest will vary with the review topic.

           

          The GRADE Ontology Working Group found that the one term open for vote, Inconsistency, did not pass (vote 5-5). The group discussed the comments received on the ballot.

          The results of this discussion are as follows: 

          The GRADE Ontology Working Group discussed and determined that the concepts of 'systematic error' and 'random error' contributing to inconsistency are addressed by the Comment for application phrase including statistical heterogeneity and methodological heterogeneity. The term was redefined,  and has been re-opened for vote. If you have already voted on the term inconsistency please vote again as the definition has been changed. 

          The group then discussed the term Indirectness. This term is now also newly open for vote.

          The meeting was recorded and can be viewed here.

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

          Inconsistency

          Variations in the findings from which the estimate of effect was derived.

          Variations may include differences across estimates from different studies or may include differences across estimates from different analyses (such as sensitivity analyses) within studies. Variations may include differences in the magnitude or direction of the findings (point estimates and confidence intervals). Inconsistency is one of the domains that can impact the rating of the certainty of evidence. Related terms for "inconsistency" include "heterogeneity", "statistical heterogeneity", and "clinical heterogeneity". Statistical heterogeneity, methodological heterogeneity, and clinical heterogeneity (differences in the population, intervention, comparison, or outcome addressed) can influence the degree of inconsistency of the findings. Any variations in findings may be considered inconsistency. The degree of inconsistency may range from being almost non-existent to large. Inconsistency may be explained or unexplained. The degree and explainability of inconsistency influences the rating of certainty of evidence, but not the definition of inconsistency itself.

          Indirectness

          Differences between any of the populations, the exposures or interventions, the comparators, or the outcomes measured in the studies or analyses that were considered to estimate the effect and those under consideration in a question of interest.

          The question of interest may vary with context, such as the key considerations for a guideline or systematic review. Indirectness is one of the domains that can impact the rating of the certainty of evidence. Related terms used by others for "directness" include "relevance", "external validity", and "applicability". Any differences may be considered indirectness. The degree of indirectness may range from being almost non-existent to large. Indirectness may be important or unimportant. The degree and importance of indirectness influences the rating of certainty of evidence, but not the definition of indirectness itself. "Indirectness in comparisons of exposures", in which A vs. C comparison is derived from an A vs. B study and a B vs. C study, is not the intended use of Indirectness.

          During the meeting the GRADE Ontology Working Group decided to prepare an abstract for submission to Global Evidence Summit 2024; this will be distributed to the group for comments when ready. The deadline for submission is Wednesday, February 21, 2024.

          The Project Management Working Group prepared a proposed agenda for next week:

          Day/Time (Eastern)

          Working Group

          Agenda Items

          Monday 8-9 am

          Project Management

          FHIR changes and EBMonFHIR Implementation Guide issues

          Monday 9-10 am

          Setting the Scientific Record on FHIR

          Development of GRADEpro-to-FEvIR Converter

          Monday 10-11 am

          CQL Development (a CDS EBMonFHIR sub-WG)

          Develop CQL for StudyEligibilityCriteria: Adolescents with non-syndromic obesity

          Monday 2-3 pm

          Statistic Terminology

          SEVCO terms for measures of calibration (1 term open for vote)

          Tuesday 9 am-10 am

          Measuring the Rate of Scientific Knowledge Transfer

          Review Global Evidence Summit abstract submission (SKAF Board last Tuesday of each month)

          Tuesday 2-3 pm

          StatisticsOnFHIR (a CDS EBMonFHIR sub-WG)

          Review progress on ComparativeEvidenceReport example

          Tuesday 3-4 pm

          Ontology Management

          Review objectives and priorities, including Global Evidence Summit abstract submission for GRADE Ontology

          Wednesday 8-9 am

          Funding the Ecosystem Infrastructure

          Review product definition for making guidelines computable

          Wednesday 9-10 am

          Communications (Awareness, Scholarly Publications)

          Publications (Study Design Terminology), Presentations (Global Evidence Summit 2, AMIA), Website

          Thursday 8-9 am

          EBM Implementation Guide (a CDS EBMonFHIR sub-WG)

          Review IG Ballot feedback, Prepare EBMonFHIR IG CodeSystem for HL7 terminology

          Thursday 9-10 am

          Computable EBM Tools Development

          Review GRADEpro-to-FEvIR Converter progress

          Friday 9-10 am

          Risk of Bias Terminology

          Review SEVCO terms for qualitative research bias and study selection bias (13 terms open for vote)

          Friday 10-11 am

          GRADE Ontology

          Term development (Inconsistency, Indirectness - 2 terms open for vote)

          Friday 12-1 pm

          Project Management

          Prepare Weekly Agenda

           

          Releases on the FEvIR Platform:

          Computable Publishing®: Recommendation Authoring Tool version 0.12.1 (February 16, 2024) creates a Composition Resource with a Recommendation Profile and the associated Resources for a structured representation of a recommendation.

          Release 0.12.1 (February 16, 2024) no longer crashes if the resource doesn't have a section element.

          Computable Publishing®: ClinicalTrials.gov-to-FEvIR Converter version 4.11.1 (February 16, 2024) converts ClinicalTrials.gov JSON to FEvIR Resources in FHIR JSON.

          Release 4.11.1 (February 16, 2024) changed the wording on the alert message.

          The FEvIR®: RAte of Dissemination Assessment Research (RADAR) Tool version 0.8.0 (February 16, 2024) facilitates research conduct and project management for an investigation to measure the rate of scientific knowledge transfer.
                        
          Release 0.8.0 (February 16, 2024) now has instructions for how to rate an index article. The left navigation now shows "Instructions" instead of "Links."

          Quote for Thought "Be like a postage stamp. Stick to a thing till you get there." — Josh Billings

          Joanne Dehnbostel

          Feb 19, 2024, 9:36:03 AM
          to Health Evidence Knowledge Accelerator (HEvKA) Weekly Update, Health Evidence Knowledge Accelerator (HEvKA)

           

           

          Project Coordination Updates

          19 people (BA, CE , CM, HK, HL, IK, JB, JD, JO, KS, KW, LL, MA, MH, PW, RC, SM, SS, TD) from 8 countries (Belgium, Brazil, Canada, Chile/Spain, Finland, Norway, UK, USA) participated this week in up to 14 active working group meetings.

          On February 12, the Project Management Working Group made progress with FHIR trackers.

          On February 16, the Project Management Working Group prepared a proposed agenda for next week:

          Day/Time (Eastern)

          Working Group

          Agenda Items

          Monday 8-9 am

          Project Management

          FHIR changes and EBMonFHIR Implementation Guide issues

          Monday 9-10 am

          Setting the Scientific Record on FHIR

          Development of GRADEpro-to-FEvIR Converter

          Monday 10-11 am

          CQL Development (a CDS EBMonFHIR sub-WG)

          Develop CQL for StudyEligibilityCriteria: Adolescents with non-syndromic obesity

          Monday 2-3 pm

          Statistic Terminology

          SEVCO terms for measures of calibration (1 term open for vote)

          Tuesday 9 am-10 am

          Measuring the Rate of Scientific Knowledge Transfer

          Review Global Evidence Summit abstract submission (SKAF Board last Tuesday of each month)

          Tuesday 2-3 pm

          StatisticsOnFHIR (a CDS EBMonFHIR sub-WG)

          Review progress on ComparativeEvidenceReport example

          Tuesday 3-4 pm

          Ontology Management

          Review objectives and priorities, including Global Evidence Summit abstract submission for GRADE Ontology

          Wednesday 8-9 am

          Funding the Ecosystem Infrastructure

          Review product definition for making guidelines computable

          Wednesday 9-10 am

          Communications (Awareness, Scholarly Publications)

          Publications (Study Design Terminology), Presentations (Global Evidence Summit 2, AMIA), Website

          Thursday 8-9 am

          EBM Implementation Guide (a CDS EBMonFHIR sub-WG)

          Review IG Ballot feedback, Prepare EBMonFHIR IG CodeSystem for HL7 terminology

          Thursday 9-10 am

          Computable EBM Tools Development

          Review GRADEpro-to-FEvIR Converter progress

          Friday 9-10 am

          Risk of Bias Terminology

          Review SEVCO terms for qualitative research bias and study selection bias (13 terms open for vote)

          Friday 10-11 am

          GRADE Ontology

          Term development (Inconsistency, Indirectness - 2 terms open for vote)

          Friday 12-1 pm

          Project Management

          Prepare Weekly Agenda

          On February 12, the Setting the Scientific Record on FHIR Working Group saw a demonstration of the new GRADEpro-to-FEvIR Converter tool, still in development on the FEvIR Platform.

On February 14, the Communications Working Group reviewed the HEvKA workshop submissions for the upcoming Global Evidence Summit 2 (https://www.globalevidencesummit.org/) titled "Scientific Evidence Code System (SEVCO): a common language for communicating evidence" and "What is Needed to Make Guidelines Computable?: A consensus-development exercise". The group then worked on an abstract submission for the same Global Evidence Summit titled "How to measure the rate of knowledge transfer from clinical trials to clinical practice".

           

          SEVCO Updates:

           

On February 12, the Statistic Terminology Working Group found that all four terms open for vote received enough votes to be approved for the code system. However, discussion of the comments on those terms led to a change in the definition of calibration slope, so that term was re-opened for vote this week. There is currently one term open for vote.

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

          calibration slope

           

          Re-Opened

          A measure of calibration that is the rate of change in the appropriately transformed value per unit change of the correspondingly transformed predicted value.

          calibration-in-the-small

For calibration of binary outcome variables (0 or 1), the calibration slope is computed from a statistical model where the log odds of the predicted probabilities is a linear function of the empirical frequencies (logistic regression). The transformation is log odds (logit, the link function for a generalized linear model for the expected value of the outcome).

          For calibration of count outcome variables (0, 1, 2, 3, ...), the calibration slope may be computed from a statistical model where the log of the predicted mean counts is a linear function of the empirical frequencies determined by unique combinations of the covariates. The transformation is log, the link function, of the counts.

          There are other types of outcome variables for which the calibration slope may be obtained.

          Slopes further away from 1.0 indicate, at upper and lower values, over- or under-confidence in the prediction.
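The binary-outcome computation described in the comment above can be sketched briefly. This is an illustrative example only, not SEVCO tooling: calibration_slope is a hypothetical helper that fits a logistic regression of observed 0/1 outcomes on the logit of the predicted probabilities via a small Newton-Raphson loop, and a perfectly calibrated set of predictions should yield a slope near 1.0.

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def calibration_slope(y, p_hat, iters=25):
    """Fit logistic regression of 0/1 outcomes y on logit(p_hat) by
    Newton-Raphson; return (intercept, slope). A slope near 1.0
    suggests good calibration."""
    x = [logit(p) for p in p_hat]
    a, b = 0.0, 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for xi, yi in zip(x, y):
            mu = sigmoid(a + b * xi)   # model-predicted probability
            w = mu * (1.0 - mu)        # weight for the Hessian
            g0 += yi - mu              # gradient w.r.t. intercept
            g1 += (yi - mu) * xi       # gradient w.r.t. slope
            h00 += w
            h01 += w * xi
            h11 += w * xi * xi
        det = h00 * h11 - h01 * h01
        a += (h11 * g0 - h01 * g1) / det   # Newton step (2x2 inverse)
        b += (h00 * g1 - h01 * g0) / det
    return a, b

# A perfectly calibrated prediction set: the empirical frequency of 1s
# at each predicted probability equals that probability exactly.
y, p_hat = [], []
for k in range(1, 10):
    p = k / 10.0
    ones = round(p * 100)
    y += [1] * ones + [0] * (100 - ones)
    p_hat += [p] * 100
intercept, slope = calibration_slope(y, p_hat)
```

For the perfectly calibrated data above the fitted slope converges to 1.0 and the intercept to 0.0, which is the benchmark the comment describes for interpreting slopes further from 1.0.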

All measures of dispersion, and all but this one measure of association, have now been approved, so the group looked ahead to the next set of terms. The term relative importance, which had been listed under percentage, was removed from the code system. The next set of terms to be defined will be the terms under area under the curve. Terms added today are: area under the longitudinal trajectory curve, area under the precision-recall curve, and partial area under the ROC curve. We will start to define these new terms next week.

On February 16, the Risk of Bias Terminology Working Group found that of the 12 terms open for vote last week, 2 terms (study eligibility criteria not prespecified and study eligibility criteria ambiguous) received enough votes, all positive, to be accepted into the code system. Two terms (study eligibility criteria not appropriate for review question and incoherence among qualitative data, analysis, and interpretation) received negative comments. These terms were edited based on those comments and the vote was re-started. If you have previously voted on these terms, you will need to vote again on the new definitions. The remaining terms (bias in search strategy, study eligibility criteria limits for study characteristics not appropriate, study eligibility criteria limits for study report characteristics not appropriate, mixed methods research bias, bias in mixed methods research design, ineffective integration of qualitative and quantitative study components, inappropriate interpretation of integration of qualitative and quantitative findings, inadequate handling of inconsistency between qualitative and quantitative findings) are still open for vote with no changes. If you have already voted for these terms and you are happy with your previous vote, there is no reason to vote again. Your vote can be changed at any time before approval, using My Ballot.

The group also defined three new terms (database search sources inadequate, non-database search sources inadequate, and search strategy not sensitive) that are open for vote. There are currently 13 terms open for vote, including the 8 that are unchanged from last week.

           

          Code and Voting Results From Last Week

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

          SEVCO:00273

          Passed

          study eligibility criteria not prespecified

          A bias in study eligibility criteria in which the criteria are not stated before the study selection process occurs.

          Failure to specify the study eligibility criteria before evaluating studies for selection can lead to a situation in which the data discovered during the study selection process influences the criteria for selection in a way that introduces systematic error.

          SEVCO:00274

          Re-Opened

study eligibility criteria not appropriate for review question

A bias in study eligibility criteria due to a mismatch between the inclusion and exclusion criteria and the research question.

The mismatch between the inclusion and exclusion criteria and the research question may relate to differences in the population, exposures, or outcomes studied from those of interest.

          SEVCO:00275

          Passed

          study eligibility criteria ambiguous

          A bias in study eligibility criteria due to unclear specification.

          Eligibility criteria that are not sufficiently described to enable reproduction of study selection can introduce systematic error.

          SEVCO:00276

          Still Open 

          study eligibility criteria limits for study characteristics not appropriate

          A bias in study eligibility criteria that is specific to restrictions based on study characteristics.

          Any restrictions applied on the basis of study characteristics should not introduce a bias in study eligibility criteria. Examples of such restrictions may include criteria based on study size, study design, study quality, or date of publication.

          In the ROBIS tool used for risk of bias assessment of systematic reviews, there is a question (1.4 Were all restrictions in eligibility criteria based on study characteristics appropriate?) that is different from the one which refers to whether the eligibility criteria are appropriate to the review question. Therefore, a separate term is available in SEVCO.

          SEVCO:00277

          Still Open 

          study eligibility criteria limits for study report characteristics not appropriate

          A bias in study eligibility criteria that is specific to restrictions on the status, structure, language, or accessibility of the study report.

Examples of study report characteristics include publication status (including preprints and unpublished data), format, language, and availability of data. The ROBIS tool used for assessing risk of bias of systematic reviews includes a signaling question '1.5 Were any restrictions in eligibility criteria based on sources of information appropriate?'

          SEVCO:00395

          Still Open 

          bias in search strategy

          A study selection bias specific to the strategy used to identify potentially eligible studies.

incoherence among qualitative data, analysis, and interpretation

Re-Opened

A qualitative research bias in which there is any mismatch among hypothesis, data collected, data analysis, and results interpretation.

          The term mismatch applies to an inappropriate or wrong or inadequate relationship.

          SEVCO:00029

          Still Open

          mixed methods research bias

          A bias specific to the alignment of design, conduct, analysis or reporting of qualitative research and quantitative research within the same research project.

Mixed methods research combines qualitative and quantitative research methods within a single study or research project, aiming to provide a more comprehensive understanding of a research problem by integrating the strengths of both approaches. Examples include combining surveys with in-depth interviews, using quantitative data to identify patterns and trends followed by qualitative data to explore the underlying reasons and meanings, or incorporating qualitative findings to help interpret and validate quantitative results.

          SEVCO:00361

          Still Open

          bias in mixed methods research design

          A mixed methods research bias in which the mixed methods approach used in a study is not appropriate for the research question and problem.

          • inadequate rationale for mixed methods design

The corresponding signaling question in the Mixed Methods Appraisal Tool (MMAT) is 5.1: 'Is there an adequate rationale for using a mixed methods design to address the research question?'

          Common mixed methods designs include:

Convergent design: The QUAL and QUAN components are usually (but not necessarily) concomitant. The purpose is to examine the same phenomenon by interpreting QUAL and QUAN results (bringing data analysis together at the interpretation stage), by integrating QUAL and QUAN datasets (e.g., data on the same cases), or by transforming data (e.g., quantization of qualitative data).

Sequential explanatory design: Results of the phase 1 QUAN component inform the phase 2 QUAL component. The purpose is to explain QUAN results using QUAL findings, e.g., the QUAN results guide the selection of QUAL data sources and data collection, and the QUAL findings contribute to the interpretation of QUAN results.

Sequential exploratory design: Results of the phase 1 QUAL component inform the phase 2 QUAN component. The purpose is to explore, develop, and test an instrument (or taxonomy), or a conceptual framework (or theoretical model), e.g., the QUAL findings inform the QUAN data collection, and the QUAN results allow a statistical generalization of the QUAL findings.

Key references: Creswell et al. (2011); Creswell and Plano Clark (2017); O'Cathain (2010)

          Reference - Hong QN, Pluye P, Fàbregues S, Bartlett G, Boardman F, Cargo M, Dagenais P, Gagnon M-P, Griffiths F, Nicolau B, O’Cathain A, Rousseau M-C, Vedel I. Mixed Methods Appraisal Tool (MMAT), version 2018. Registration of Copyright (#1148552), Canadian Intellectual Property Office, Industry Canada. Accessed at: http://mixedmethodsappraisaltoolpublic.pbworks.com/w/file/fetch/127916259/MMAT_2018_criteria-manual_2018-08-01_ENG.pdf

          SEVCO:00362

          Still Open

          ineffective integration of qualitative and quantitative study components

          A mixed methods research bias in which the qualitative research and quantitative research components are not adequately combined.

          SEVCO:00363

          Still Open

          inappropriate interpretation of integration of qualitative and quantitative findings

          A mixed methods research bias in which the process of combining the results of the constituent analyses is flawed.

This criterion is related to meta-inference, which is defined as the overall interpretations derived from integrating qualitative and quantitative findings (Teddlie and Tashakkori, 2009). Meta-inference occurs during the interpretation of the findings from the integration of the qualitative and quantitative components, and shows the added value of conducting a mixed methods study rather than having two separate studies. (Pluye et al., 2018)

          SEVCO:00364

          Still Open

          inadequate handling of inconsistency between qualitative and quantitative findings

          A mixed methods research bias in which discrepancies in the results from the qualitative and quantitative components are not adequately addressed.

          When integrating the findings from the qualitative and quantitative components, divergences and inconsistencies (also called conflicts, contradictions, discordances, discrepancies, and dissonances) can be found. It is not sufficient to only report the divergences; they need to be explained. Different strategies to address the divergences have been suggested such as reconciliation, initiation, bracketing and exclusion (Pluye et al., 2009b). (Reference - Hong QN, Pluye P, Fàbregues S, Bartlett G, Boardman F, Cargo M, Dagenais P, Gagnon M-P, Griffiths F, Nicolau B, O’Cathain A, Rousseau M-C, Vedel I. Mixed Methods Appraisal Tool (MMAT), version 2018. Registration of Copyright (#1148552), Canadian Intellectual Property Office, Industry Canada. Accessed at: http://mixedmethodsappraisaltoolpublic.pbworks.com/w/file/fetch/127916259/MMAT_2018_criteria-manual_2018-08-01_ENG.pdf)

          SEVCO:00263

          Newly Open

          database search sources inadequate

          A bias in search strategy in which the electronic sources are not sufficient to find the studies available in electronic sources.

          The set of databases (electronic sources) expected to include the studies of interest will vary with the review topic.

          SEVCO:00264

          Newly Open

          non-database search sources inadequate

          A bias in search strategy in which the sources other than electronic database sources are not sufficient to find the studies available.

          The set of sources other than databases (electronic sources) expected to include the studies of interest will vary with the review topic.

          SEVCO:00265

          Newly Open

          search strategy not sensitive

          A bias in search strategy in which the search terms and combinations of search terms are not sufficient to find the studies available.

          The search terms and combinations of search terms expected to include the studies of interest will vary with the review topic.

           

           

           

          FEvIR Platform and Tools Development Updates:

On February 15, the Computable EBM Tools Development Working Group reviewed progress on the GRADEpro-to-FEvIR Converter tool on the FEvIR Platform. The group then discussed potential improvements to the Summary of Findings Table user interface (https://fevir.net/resources/Composition/175771), including whether to add absolute estimates, relative estimates, or both, and reviewed other recent developments on the FEvIR Platform.

On February 12, the CQL Development Working Group (a CDS EBMonFHIR sub-WG) discussed potential real-world examples to be converted to CQL.

          HL7 Standards Development Updates:

On February 15, the EBM Implementation Guide Working Group (a CDS EBMonFHIR sub-WG) discussed instructions and guidance for how to use the Group Resource to describe eligibility criteria (see the 'Expressing Eligibility Criteria using the Group (R6) Resource' reference page).
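As a rough illustration of the pattern under discussion, here is a sketch of how eligibility criteria might be expressed with the Group Resource, built as a plain Python dict. The element names (type, membership, characteristic, exclude) follow the published R5 Group resource; the R6 shape being discussed may differ, and the specific criteria and codes shown are hypothetical placeholders, not content from the meeting.

```python
# Sketch of a FHIR Group resource expressing study eligibility criteria
# (R5 element names; the criteria below are illustrative placeholders).
eligibility_group = {
    "resourceType": "Group",
    "type": "person",
    "membership": "definitional",  # criteria define who would qualify
    "characteristic": [
        {   # inclusion criterion: age 12-18 years (hypothetical)
            "code": {"text": "age"},
            "valueRange": {
                "low": {"value": 12, "unit": "years"},
                "high": {"value": 18, "unit": "years"},
            },
            "exclude": False,
        },
        {   # exclusion criterion: syndromic obesity (hypothetical)
            "code": {"text": "syndromic obesity"},
            "valueBoolean": True,
            "exclude": True,
        },
    ],
}
```

The key idea is that each characteristic entry pairs a code with a value and an exclude flag, so inclusion and exclusion criteria live in one list rather than in free text.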

          On February 15, the HL7 Biomedical Research and Regulation (BR&R) Working Group also discussed the Expressing Eligibility Criteria using the Group (R6) Resource - Reference Page and added examples to explain how to add expressions and when an extension is needed.

On February 15, the HL7 Structured Documents Working Group discussed FHIR trackers that we have been working to implement, including https://jira.hl7.org/browse/FHIR-34407 and https://jira.hl7.org/browse/FHIR-38201.

On February 13, the StatisticsOnFHIR Working Group (a CDS EBMonFHIR sub-WG) reviewed progress on a ComparativeEvidenceReport for a real-world evidence example (https://fevir.net/resources/Project/195718). This week's emphasis was how to represent the variables and evidence contained in a CONSORT diagram using profiles from the Evidence Based Medicine (EBMonFHIR) Implementation Guide. Discussion of this topic resulted in a request to change the naming of two profiles in the implementation guide for clarity and consistency (https://jira.hl7.org/browse/FHIR-44740).

           

          Knowledge Ecosystem Liaison/Coordination Updates:

          On February 14, the Funding the Ecosystem Infrastructure Working Group created the following statements as a product definition for making guidelines computable:

          For (target customer) who (statement of the need or opportunity), the (product name) is a (product category) that (statement of key benefit – that is, compelling reason to buy)

For (guideline developer) who (wants to share and store their content in computable form), the (Computable Publishing: Guideline Authoring Tool) is a (web-based user interface to enter data for automatic conversion to computable form) that (enables creation of standard computable content without having to learn technical specifications, i.e. enables creating computable content without any software engineering personnel). Creating computable content for guidelines is desired to make updating guidelines more efficient by limiting changes to only the concepts that need to change and coordinating the changes and packaging within linked content, to make shared guidelines easier to find by providing metadata that is used when searchers limit their searches, and to make shared guidelines easier to use by providing the guideline content in a form that is interpretable by software tools for guideline development (adaptation) and guideline implementation.

For (a clinical trial investigator) who (needs to report the trial protocol in computable form), the (Computable Publishing: Research Protocol Authoring Tool) is a (web-based user interface to enter data for automatic conversion to computable form) that (enables creation of standard computable content without having to learn technical specifications, i.e. enables creating computable content without any software engineering personnel). Creating computable content for research protocols is desired to make updating protocols more efficient by limiting changes to only the concepts that need to change and coordinating the changes and packaging within linked content, to make reporting protocols easier by providing the report in forms meeting technical specifications, and to make shared protocols easier to use by providing the protocol content in a form that is interpretable by software tools for trial management.

           

          On February 13, the Ontology Management Working Group discussed the following completed (or nearly completed) FHIR trackers which will support the work of multiple HL7 Working Groups. The group also discussed the upcoming HL7 Terminology freeze (on February 25).

          --- EBMonFHIR ---
Successfully built; needs to be merged in FHIR, and needs approval for merge in the FHIR-extensions repos: https://jira.hl7.org/browse/FHIR-42885

          --- BR&R ---
Almost done; we just need to coordinate the Terminology (THO) changes: https://jira.hl7.org/browse/FHIR-43561

          --- STRUCTURED DOCUMENTS ---
          Completed and merged
          https://jira.hl7.org/browse/FHIR-34407

The aforementioned improvements to the FHIR specification were discussed in the HL7 Biomedical Research and Regulation (BR&R) Working Group meeting today. The BR&R group also discussed coordination of the representation of participant flow measures with our HEvKA StatisticsOnFHIR group (a CDS EBMonFHIR sub-WG).

           

On February 16, the GRADE Ontology Working Group found that the one term open for vote, Inconsistency, did not pass (vote 5-5). The group discussed the comments received on the ballot.

          The results of this discussion are as follows: 

The GRADE Ontology Working Group discussed and determined that the concepts of 'systematic error' and 'random error' contributing to inconsistency are addressed by the phrase in the Comment for application that includes statistical heterogeneity and methodological heterogeneity. The term was redefined and has been re-opened for vote. If you have already voted on the term inconsistency, please vote again, as the definition has been changed.

          The group then discussed the term Indirectness. This term is now also newly open for vote.

          The meeting was recorded and can be viewed here

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

          Inconsistency

          Variations in the findings from which the estimate of effect was derived.

          Variations may include differences across estimates from different studies or may include differences across estimates from different analyses (such as sensitivity analyses) within studies. Variations may include differences in the magnitude or direction of the findings (point estimates and confidence intervals). Inconsistency is one of the domains that can impact the rating of the certainty of evidence. Related terms for "inconsistency" include "heterogeneity", "statistical heterogeneity", and "clinical heterogeneity". Statistical heterogeneity, methodological heterogeneity, and clinical heterogeneity (differences in the population, intervention, comparison, or outcome addressed) can influence the degree of inconsistency of the findings. Any variations in findings may be considered inconsistency. The degree of inconsistency may range from being almost non-existent to large. Inconsistency may be explained or unexplained. The degree and explainability of inconsistency influences the rating of certainty of evidence, but not the definition of inconsistency itself.

          Indirectness

          Differences between any of the populations, the exposures or interventions, the comparators, or the outcomes measured in the studies or analyses that were considered to estimate the effect and those under consideration in a question of interest.

          The question of interest may vary with context, such as the key considerations for a guideline or systematic review. Indirectness is one of the domains that can impact the rating of the certainty of evidence. Related terms used by others for "directness" include "relevance", "external validity", and "applicability". Any differences may be considered indirectness. The degree of indirectness may range from being almost non-existent to large. Indirectness may be important or unimportant. The degree and importance of indirectness influences the rating of certainty of evidence, but not the definition of indirectness itself. "Indirectness in comparisons of exposures", in which A vs. C comparison is derived from an A vs. B study and a B vs. C study, is not the intended use of Indirectness.

During the meeting, the GRADE Ontology Working Group decided to prepare an abstract for submission to the Global Evidence Summit 2024; this will be distributed to the group for comments when ready. The deadline for submission is Wednesday, February 21, 2024.

           

          Research Development Updates:

On February 13, the Measuring the Rate of Scientific Knowledge Transfer Working Group discussed the latest developments in the FEvIR®: RAte of Dissemination Assessment Research (RADAR) Tool, which will support our next research project. The group then added instructions for investigators to the software. An abstract of the anticipated research project will be submitted to the upcoming Global Evidence Summit 2 (https://www.globalevidencesummit.org/). Abstracts are due next week; the group hopes to refine and submit the finished abstract in next week's meeting.

           

          Releases on the FEvIR Platform:

          Computable Publishing®: Recommendation Authoring Tool version 0.12.1 (February 16, 2024) creates a Composition Resource with a Recommendation Profile and the associated Resources for a structured representation of a recommendation.

Release 0.12.1 (February 16, 2024) no longer crashes if the resource doesn't have a section element.

Computable Publishing®: ClinicalTrials.gov-to-FEvIR Converter version 4.11.1 (February 16, 2024) converts ClinicalTrials.gov JSON to FEvIR Resources in FHIR JSON.
Release 4.11.1 (February 16, 2024) changed the wording on the alert message.

The FEvIR®: RAte of Dissemination Assessment Research (RADAR) Tool version 0.8.0 (February 16, 2024) facilitates research conduct and project management for an investigation to measure the rate of scientific knowledge transfer.
Release 0.8.0 (February 16, 2024) adds instructions for how to rate an index article, and changes the left navigation to show "Instructions" instead of "Links."

          Quotes for Thought: 

          "Impossible standards just make life difficult" – Fortune Cookie

          "It is not the mountain we conquer but ourselves."- Sir Edmund Hillary

          "Love doesn't make the world go 'round. Love is what makes the ride worthwhile." – Franklin P. Jones

          “Turn your wounds into wisdom.” — Oprah Winfrey

          "Be like a postage stamp. Stick to a thing till you get there." — Josh Billings

           

To get involved or stay informed: HEvKA Project Page on FEvIR Platform, HEvKA Project Page on HL7 Confluence, or join any of the groups meeting in the weekly schedule above.

          Joanne Dehnbostel

Feb 20, 2024, 11:49:25 AM
          to Health Evidence Knowledge Accelerator (HEvKA)

           

           

6 people (BA, CE, HL, JD, KS, MA) participated in up to three active working group meetings.

          The Project Management Working Group worked on FHIR changes and EBMonFHIR Implementation Guide issues.

The Setting the Scientific Record on FHIR Working Group and the CQL Development Working Group (a CDS EBMonFHIR sub-WG) continued development of the GRADEpro-to-FEvIR Converter Tool on the FEvIR Platform.

          The Statistic Terminology Working Group found that the one term open for vote last week, calibration slope, did not receive enough votes for approval. It remains open for voting this week. The group then created a definition for measure of heterogeneity and moved it in the hierarchy under measure of dispersion. 

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

calibration slope

A measure of calibration that is the rate of change in the appropriately transformed value per unit change of the correspondingly transformed predicted value.

          calibration-in-the-small

For calibration of binary outcome variables (0 or 1), the calibration slope is computed from a statistical model where the log odds of the predicted probabilities is a linear function of the empirical frequencies (logistic regression). The transformation is log odds (logit, the link function for a generalized linear model for the expected value of the outcome).

          For calibration of count outcome variables (0, 1, 2, 3, ...), the calibration slope may be computed from a statistical model where the log of the predicted mean counts is a linear function of the empirical frequencies determined by unique combinations of the covariates. The transformation is log, the link function, of the counts.

          There are other types of outcome variables for which the calibration slope may be obtained.

          Slopes further away from 1.0 indicate, at upper and lower values, over- or under-confidence in the prediction.

          measure of heterogeneity

          A statistic that represents the variation or spread among values in the set of estimates across studies.

          measure of statistical heterogeneity

There are several types of heterogeneity (or diversity), which are important factors in determining whether or not evidence should be pooled. Clinical heterogeneity may refer to variations in the population, intervention, comparator, or outcome. Methodological heterogeneity may refer to variations in study design. Statistical heterogeneity is described here.

          A measure of dispersion is defined as a statistic that represents the variation or spread among data values in a dataset or data distribution. In the context of a meta-analysis, a measure of heterogeneity is a measure of dispersion in which the dataset is the set of estimates across studies.
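To make that relationship concrete, here is a minimal sketch (illustrative only, not SEVCO tooling) of one common pair of statistical-heterogeneity measures, Cochran's Q and the derived I-squared statistic, computed from the set of study estimates and their standard errors using inverse-variance weights:

```python
def heterogeneity(estimates, std_errors):
    """Cochran's Q and I-squared for a set of study effect estimates,
    using fixed-effect inverse-variance weights. The 'dataset' here is
    the set of estimates across studies, per the definition above."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, estimates))
    df = len(estimates) - 1
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0
    return q, i2

# Identical estimates: no statistical heterogeneity.
q_same, i2_same = heterogeneity([0.3, 0.3, 0.3], [0.1, 0.1, 0.1])

# Spread-out estimates: Q exceeds its degrees of freedom, I-squared > 0.
q_diff, i2_diff = heterogeneity([0.1, 0.5, 0.9], [0.1, 0.1, 0.1])
```

The function is a direct instance of the definition: it measures dispersion, but the "data values" are study-level estimates rather than individual observations.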

          Quote for thought: "Worry is the interest paid by those who borrow trouble. " – George Washington

          Joanne Dehnbostel

Feb 21, 2024, 8:00:52 AM
          to Health Evidence Knowledge Accelerator (HEvKA)

           

           

6 people (BA, CE, HL, JD, KS, KW) participated today in up to three active working group meetings.

          The Measuring the Rate of Scientific Knowledge Transfer Working Group reviewed an abstract for the upcoming Global Evidence Summit 2, https://www.globalevidencesummit.org/. The abstract deadline has been extended to February 29. 

The StatisticsOnFHIR Working Group (a CDS EBMonFHIR sub-WG) continued to review progress on a ComparativeEvidenceReport for a real-world evidence example (https://fevir.net/resources/Project/195718), and how to represent the variables and evidence contained in a CONSORT diagram using profiles from the Evidence Based Medicine (EBMonFHIR) Implementation Guide. The change request created in this meeting last week (https://jira.hl7.org/browse/FHIR-44740) was discussed in today's HL7 Biomedical Research and Regulation Working Group meeting.

The Ontology Management Working Group discussed a request by the Associate Editor of the journal Evidence-Based Toxicology for reviewers for an article titled "Translation of core terms of chemical risk assessment into the language of systematic review: research protocol". A preprint of the article can be seen on Zenodo. This project is very similar to our own SEVCO protocol, so we may have expertise that is valuable to this project. If anyone is interested in reviewing this article, please let us know.

          Quote for thought: "Kindness is the only service that will stand the storm of life and not wash out" – Abraham Lincoln

          Joanne Dehnbostel

          Feb 21, 2024, 6:57:50 PM
          to Health Evidence Knowledge Accelerator (HEvKA)

           

          7 people (BA, CE, JD, JO, KR, KS, MA) participated in up to 2 active working group meetings.

          The Funding the Ecosystem Infrastructure Working Group discussed a potential panel presentation for the Global Evidence Summit, which would address all 6 programmatic domains of the conference and could cover a question such as:

          How can IT make all parts of the Evidence Ecosystem more efficient and effective, and what global collaborations are working together to make this a reality in the next year?

          Groups to be represented can include GIN (GINTech, GIN North America), GRADE Working Group, Health Evidence Knowledge Accelerator, and HL7 (EBMonFHIR).

           

          The Funding the Ecosystem Infrastructure Working Group then revised the following statements (started last week) as a product definition for making guidelines computable and for making clinical trial protocols computable:

          Template: For (target customer) who (statement of the need or opportunity), the (product name) is a (product category) that (statement of key benefit – that is, compelling reason to buy)

          Changed from: For (guideline developer) who (wants to share and store their content in computable form), the (Computable Publishing: Guideline Authoring Tool) is a (web-based user interface to enter information for automatic conversion to computable form) that (enables creation of standard computable content without having to learn technical specifications, i.e. enables creating computable content without any software engineering personnel). Creating computable content for guidelines is desired to make updating guidelines more efficient by limiting changes to only the concepts that need to change and coordinating the changes and packaging with linked content, to make shared guidelines easier to find by providing metadata that is used when searchers limit their searches, and to make shared guidelines easier to use by providing the guideline content in a form that is interpretable by software tools for guideline development (adaptation) and guideline implementation.

          To: For (guideline developer) who (needs to make guidelines easier to update, find, and use), the (Computable Publishing: Guideline Authoring Tool) is a (web-based user interface to enter information for automatic conversion to computable form) that (uses an interoperable standard).

          For (a clinical trial investigator) who (needs to report the trial protocol in computable form), the (Computable Publishing: Research Protocol Authoring Tool) is a (web-based user interface to enter data for automatic conversion to computable form) that (enables creation of standard computable content without having to learn technical specifications, i.e. enables creating computable content without any software engineering personnel). Creating computable content for research protocols is desired to make updating protocols more efficient by limiting changes to only the concepts that need to change and coordinating the changes and packaging within linked content, make reporting protocols easier by providing the report in forms meeting technical specifications, and make shared protocols easier to use by providing the protocol content in a form that is interpretable by software tools for trial management.

          The Communications Working Group continued to discuss two abstracts in progress for the Global Evidence Summit, now due February 29, 2024. The group then brainstormed ideas for a potential workshop for AMIA 2024, which will take place in November in San Francisco, CA. The theme for the AMIA conference is Informatics in the Age of Generative Artificial Intelligence; submissions are due March 18, 2024.

          Quote for thought: “Late February days; and now, at last, might you have thought that winter’s woe was past; so fair the sky was and so soft the air.”— William Morris

          Joanne Dehnbostel

          Feb 23, 2024, 8:52:02 AM
          to Health Evidence Knowledge Accelerator (HEvKA)

           

           

          7 people (BA, GL, IK, JD, JW, KS, MH) participated in up to 2 active working group meetings.

          The EBM Implementation Guide Working Group (a CDS EBMonFHIR sub-WG) worked to respond to IG ballot feedback, including adding helpful links to navigate through the pages of the IG. The group also discussed how to represent CONSORT diagrams in FHIR resources using profiles from the IG, and how to name these profiles for clarity.

          The Computable EBM Tools Development Working Group reviewed GRADEpro-to-FEvIR Converter progress and gave feedback on the user interface for the new software tool under development.

          The HL7 Biomedical Research and Regulation Working Group (BR&R) continued to discuss how to represent eligibility criteria using the Group Resource and added to the following reference page: Expressing Eligibility Criteria using the Group (R6) Resource - Reference Page.

          Quote for thought: "Resources mean less than resourcefulness" --puzzle book

          Joanne Dehnbostel

          Feb 24, 2024, 11:14:25 PM
          to Health Evidence Knowledge Accelerator (HEvKA)

          12 people (BA, CA-D, HK, HL, JD, KS, KW, MA, PW, SL, SM, SS) participated today in three active working group meetings.

           

          The Risk of Bias Terminology Working Group found that of the 13 terms open for vote last week, 7 passed, 5 did not have enough votes and are still open, and one did not pass but was refined based on comments received and was re-opened for vote. There are currently 6 terms open for vote. 

          Code and Voting Results From Last Week

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

          A bias in study eligibility criteria due to a mismatch between the inclusion and exclusion criteria and the research question.

          The mismatch between the inclusion and exclusion criteria and the research question may relate to differences in the population, exposures, or outcomes studied from those of interest.

          SEVCO:00276

          Re-opened

          A bias in study eligibility criteria that is specific to restrictions based on characteristics of the study design, conduct, or findings.

          Any restrictions applied on the basis of study characteristics should not introduce a bias in study eligibility criteria. Examples of such restrictions may include criteria based on study size, study design, study quality, or date when the study was conducted.

          In the ROBIS tool used for risk of bias assessment of systematic reviews, there is a question (1.4 Were all restrictions in eligibility criteria based on study characteristics appropriate?) that is different from the one which refers to whether the eligibility criteria are appropriate to the review question. Therefore, a separate term is available in SEVCO.

          SEVCO:00263

          Still Open

          database search sources inadequate

          A bias in search strategy in which the electronic sources are not sufficient to find the studies available in electronic sources.

          The set of databases (electronic sources) expected to include the studies of interest will vary with the review topic.

          SEVCO:00264

          Still Open

          non-database search sources inadequate

          A bias in search strategy in which the sources other than electronic database sources are not sufficient to find the studies available.

          The set of sources other than databases (electronic sources) expected to include the studies of interest will vary with the review topic.

          SEVCO:00265

          Still Open

          search strategy not sensitive

          A bias in search strategy in which the search terms and combinations of search terms are not sufficient to find the studies available.

          The search terms and combinations of search terms expected to include the studies of interest will vary with the review topic.

          SEVCO:00360

          Still Open

          incoherence among qualitative data, analysis, and interpretation

          A qualitative research bias in which there is any mismatch among hypothesis, data collected, data analysis, and results interpretation.

          The term mismatch applies to an inappropriate or wrong or inadequate relationship.

          SEVCO:00277

          Passed 

          study eligibility criteria limits for study report characteristics not appropriate

          A bias in study eligibility criteria that is specific to restrictions on the status, structure, language, or accessibility of the study report.

          Examples of study report characteristics include publication status (including preprints and unpublished data), format, language, and availability of data. The ROBIS tool used for assessing risk of bias of systematic reviews includes a signaling question '1.5 Were any restrictions in eligibility criteria based on sources of information appropriate?'

          SEVCO:00395

          Passed 

          bias in search strategy

          A study selection bias specific to the strategy used to identify potentially eligible studies.

          SEVCO:00029

          Passed

          mixed methods research bias

          A bias specific to the alignment of design, conduct, analysis or reporting of qualitative research and quantitative research within the same research project.

          Mixed methods research is a research approach that combines both qualitative and quantitative research methods within a single study or research project. This methodology aims to provide a more comprehensive understanding of a research problem by integrating the strengths of both qualitative and quantitative research. Examples of mixed methods research include combining surveys with in-depth interviews, using quantitative data to identify patterns and trends followed by qualitative data to explore the underlying reasons and meanings, or incorporating qualitative findings to help interpret and validate quantitative results. Overall, mixed methods research provides a more holistic understanding of a research question by acknowledging and leveraging the strengths of both qualitative and quantitative approaches.

          SEVCO:00361

          Passed

          bias in mixed methods research design

          A mixed methods research bias in which the mixed methods approach used in a study is not appropriate for the research question and problem.

          • inadequate rationale for mixed methods design

          The corresponding signaling question in the Mixed Methods Appraisal Tool (MMAT) is 5.1: Is there an adequate rationale for using a mixed methods design to address the research question?

          Common mixed methods designs include:

          Convergent design The QUAL and QUAN components are usually (but not necessarily) concomitant. The purpose is to examine the same phenomenon by interpreting QUAL and QUAN results (bringing data analysis together at the interpretation stage), or by integrating QUAL and QUAN datasets (e.g., data on same cases), or by transforming data (e.g., quantization of qualitative data).

          Sequential explanatory design Results of the phase 1 - QUAN component inform the phase 2 - QUAL component. The purpose is to explain QUAN results using QUAL findings. E.g., the QUAN results guide the selection of QUAL data sources and data collection, and the QUAL findings contribute to the interpretation of QUAN results.

          Sequential exploratory design Results of the phase 1 - QUAL component inform the phase 2 - QUAN component. The purpose is to explore, develop and test an instrument (or taxonomy), or a conceptual framework (or theoretical model). E.g., the QUAL findings inform the QUAN data collection, and the QUAN results allow a statistical generalization of the QUAL findings.

          Key references: Creswell et al. (2011); Creswell and Plano Clark, (2017); O'Cathain (2010)

          Reference - Hong QN, Pluye P, Fàbregues S, Bartlett G, Boardman F, Cargo M, Dagenais P, Gagnon M-P, Griffiths F, Nicolau B, O’Cathain A, Rousseau M-C, Vedel I. Mixed Methods Appraisal Tool (MMAT), version 2018. Registration of Copyright (#1148552), Canadian Intellectual Property Office, Industry Canada. Accessed at: http://mixedmethodsappraisaltoolpublic.pbworks.com/w/file/fetch/127916259/MMAT_2018_criteria-manual_2018-08-01_ENG.pdf

          SEVCO:00362

          Passed

          ineffective integration of qualitative and quantitative study components

          A mixed methods research bias in which the qualitative research and quantitative research components are not adequately combined.

          SEVCO:00363

          Passed

          inappropriate interpretation of integration of qualitative and quantitative findings

          A mixed methods research bias in which the process of combining the results of the constituent analyses is flawed.

          This criterion is related to meta-inference, which is defined as the overall interpretations derived from integrating qualitative and quantitative findings (Teddlie and Tashakkori, 2009). Meta-inference occurs during the interpretation of the findings from the integration of the qualitative and quantitative components, and shows the added value of conducting a mixed methods study rather than having two separate studies. (Pluye et al 2018)

          SEVCO:00364

          Passed

          inadequate handling of inconsistency between qualitative and quantitative findings

          A mixed methods research bias in which discrepancies in the results from the qualitative and quantitative components are not adequately addressed.

          When integrating the findings from the qualitative and quantitative components, divergences and inconsistencies (also called conflicts, contradictions, discordances, discrepancies, and dissonances) can be found. It is not sufficient to only report the divergences; they need to be explained. Different strategies to address the divergences have been suggested such as reconciliation, initiation, bracketing and exclusion (Pluye et al., 2009b). (Reference - Hong QN, Pluye P, Fàbregues S, Bartlett G, Boardman F, Cargo M, Dagenais P, Gagnon M-P, Griffiths F, Nicolau B, O’Cathain A, Rousseau M-C, Vedel I. Mixed Methods Appraisal Tool (MMAT), version 2018. Registration of Copyright (#1148552), Canadian Intellectual Property Office, Industry Canada. Accessed at: http://mixedmethodsappraisaltoolpublic.pbworks.com/w/file/fetch/127916259/MMAT_2018_criteria-manual_2018-08-01_ENG.pdf)

           

          The GRADE Ontology Working Group found that the one term open for vote, Inconsistency, received 10 affirmative votes (10-0) and was added to the GRADE Ontology.

          The term Indirectness received 8 affirmative votes (8-0), but we received a comment recommending that the definition be simplified. The definition was changed after a group discussion, and because the definition has changed, the term has been re-opened for vote.

          If you have already voted on the term Indirectness, please vote again, as the definition has been changed.

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

          Indirectness

          1. Differences between any of the characteristics of the question of interest (i.e., populations, exposures, interventions, comparators, or outcomes) and the characteristics of the evidence. 2. The degree to which the results of a study apply to a target context outside of that study.

          The question of interest may vary with context, such as the key considerations for a guideline or systematic review. Indirectness is one of the domains that can impact the rating of the certainty of evidence. Related terms used by others for "directness" include "relevance", "external validity", and "applicability". Directness is the extent to which the results of an experimental or observational study apply to a target context outside of that study. Any differences may be considered indirectness. The degree of indirectness may range from being almost non-existent to large. Indirectness may be important or unimportant. The degree and importance of indirectness influences the rating of certainty of evidence, but not the definition of indirectness itself. In a network meta-analysis, an indirect effect estimate describes an A vs. C comparison that is derived from A vs. B studies and B vs. C studies. This use of the term "indirect" is not the intended use of Indirectness.

           

          During the meeting, the GRADE Ontology Working Group discussed an abstract for submission to Global Evidence Summit 2024. The deadline for submission has been extended to February 29, 2024.

          The Project Management Working Group prepared next week's proposed agenda:

          Day/Time (Eastern)

          Working Group

          Agenda Items

          Monday 8-9 am

          Project Management

          FHIR changes and EBMonFHIR Implementation Guide issues

          Monday 9-10 am

          Setting the Scientific Record on FHIR

          Development of GRADEpro-to-FEvIR Converter

          Monday 10-11 am

          CQL Development (a CDS EBMonFHIR sub-WG)

          Develop CQL for StudyEligibilityCriteria: Adolescents with non-syndromic obesity

          Monday 2-3 pm

          Statistic Terminology

          SEVCO terms for measures of calibration (2 terms open for vote, Calibration Slope, Measure of Heterogeneity)

          Tuesday 9 am-10 am

          Measuring the Rate of Scientific Knowledge Transfer

          Canceled to allow participation in the GIN meeting "Empowering global guideline adaptability: Exploring shortcomings of BMJ Rapid Recommendations and strategies for global guidelines that are easier to adapt." (SKAF Board Meeting moved to Wednesday.)

          Tuesday 2-3 pm

          StatisticsOnFHIR (a CDS EBMonFHIR sub-WG)

          Review progress on ComparativeEvidenceReport example

          Tuesday 3-4 pm

          Ontology Management

          Review objectives and priorities, including Global Evidence Summit abstract submission for GRADE Ontology

          Wednesday 8-9 am

          Funding the Ecosystem Infrastructure

          Review product definition for making guidelines computable

          Wednesday 9-10 am

          Communications (Awareness, Scholarly Publications)

          Publications (Study Design Terminology), Presentations (Global Evidence Summit 2, AMIA), Website

          Wednesday 11 am-12 pm

          SKAF Board Meeting (Note Time Change)

          SKAF Board Agenda

          Thursday 8-9 am

          EBM Implementation Guide (a CDS EBMonFHIR sub-WG)

          Review IG Ballot feedback, Prepare EBMonFHIR IG CodeSystem for HL7 terminology

          Thursday 9-10 am

          Computable EBM Tools Development

          Review GRADEpro-to-FEvIR Converter progress

          Friday 9-10 am

          Risk of Bias Terminology

          Review SEVCO terms for qualitative research bias and study selection bias (6 terms open for vote)

          Friday 10-11 am

          GRADE Ontology

          Term development (Indirectness - 1 term open for vote)

          Friday 12-1 pm

          Project Management

          Prepare Weekly Agenda

          Releases on the FEvIR Platform:

          An Associated Resources section, which shows a list of all the resources referenced in that resource, was added to each of the following tools:
            Comparative Evidence Report Viewer/Builder 0.15.0 : https://fevir.net/resources/Project/176002
            Composition Viewer/Builder 0.16.0 : https://fevir.net/resources/Project/103697
            EvidenceReport Viewer/Builder 0.15.0 : https://fevir.net/resources/Project/176002
            Measure Viewer/Builder 0.2.0 : https://fevir.net/resources/Project/114482
            ResearchStudy Viewer/Builder 0.7.0 : https://fevir.net/resources/Project/29882
            Summary Of Findings Authoring Tool 0.18.0 : https://fevir.net/resources/Project/114899

          Quote for thought: 'Half of 8 is 3 if you pick the right half.' -- math.answers.com

          Joanne Dehnbostel

          Feb 26, 2024, 8:57:56 AM
          to Health Evidence Knowledge Accelerator (HEvKA), Health Evidence Knowledge Accelerator (HEvKA) Weekly Update

           

           

          Project Coordination Updates

           

          18 people (BA, CA-D, CE, GL, HK, HL, IK, JD, JO, KR, KS, KW, MA, MH, PW, SL, SM, SS) from 8 countries (Belgium, Canada, China, Finland, Germany, Peru, UK, USA) participated this week in up to 14 active working group meetings.

          On February 19, the Project Management Working Group worked on FHIR changes and EBMonFHIR Implementation Guide issues.

          On February 23, the Project Management Working Group prepared next week's proposed agenda:

          Day/Time (Eastern)

          Working Group

          Agenda Items

          Monday 8-9 am

          Project Management

          FHIR changes and EBMonFHIR Implementation Guide issues

          Monday 9-10 am

          Setting the Scientific Record on FHIR

          Development of GRADEpro-to-FEvIR Converter

          Monday 10-11 am

          CQL Development (a CDS EBMonFHIR sub-WG)

          Develop CQL for StudyEligibilityCriteria: Adolescents with non-syndromic obesity

          Monday 2-3 pm

          Statistic Terminology

          SEVCO terms for measures of calibration (2 terms open for vote, Calibration Slope, Measure of Heterogeneity)

          Tuesday 9 am-10 am

          Measuring the Rate of Scientific Knowledge Transfer

          Canceled to allow participation in the GIN meeting "Empowering global guideline adaptability: Exploring shortcomings of BMJ Rapid Recommendations and strategies for global guidelines that are easier to adapt." (SKAF Board Meeting moved to Wednesday.)

          Tuesday 2-3 pm

          StatisticsOnFHIR (a CDS EBMonFHIR sub-WG)

          Review progress on ComparativeEvidenceReport example

          Tuesday 3-4 pm

          Ontology Management

          Review objectives and priorities, including Global Evidence Summit abstract submission for GRADE Ontology

          Wednesday 8-9 am

          Funding the Ecosystem Infrastructure

          Review product definition for making guidelines computable

          Wednesday 9-10 am

          Communications (Awareness, Scholarly Publications)

          Publications (Study Design Terminology), Presentations (Global Evidence Summit 2, AMIA), Website

          Wednesday 11 am-12 pm

          SKAF Board Meeting (Note Time Change)

          SKAF Board Agenda

          Thursday 8-9 am

          EBM Implementation Guide (a CDS EBMonFHIR sub-WG)

          Review IG Ballot feedback, Prepare EBMonFHIR IG CodeSystem for HL7 terminology

          Thursday 9-10 am

          Computable EBM Tools Development

          Review GRADEpro-to-FEvIR Converter progress

          Friday 9-10 am

          Risk of Bias Terminology

          Review SEVCO terms for qualitative research bias and study selection bias (6 terms open for vote)

          Friday 10-11 am

          GRADE Ontology

          Term development (Indirectness - 1 term open for vote)

          Friday 12-1 pm

          Project Management

          Prepare Weekly Agenda

          The following FHIR changes were applied: https://jira.hl7.org/browse/FHIR-43328 and https://jira.hl7.org/browse/FHIR-41632.

          On February 19, the Setting the Scientific Record on FHIR Working Group continued development of the GRADEpro-to-FEvIR Converter Tool on the FEvIR Platform.

          On February 21, the Communications Working Group continued to discuss two abstracts in progress for the Global Evidence Summit, now due February 29, 2024. The group then brainstormed ideas for a potential workshop for AMIA 2024, which will take place in November in San Francisco, CA. The theme for the AMIA conference is Informatics in the Age of Generative Artificial Intelligence; submissions are due March 18, 2024.

           

          SEVCO Updates

          On February 19, the Statistic Terminology Working Group found that the one term open for vote last week, calibration slope, did not receive enough votes for approval. It remains open for voting this week. The group then created a definition for measure of heterogeneity and moved it in the hierarchy under measure of dispersion. 

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

          calibration slope

          A measure of calibration that is the rate of change in the appropriately transformed value per unit change of the correspondingly transformed predicted value.

          calibration-in-the-small

          For calibration of binary outcome variables (0 or 1), the calibration slope is computed from a logistic regression model in which the log odds of the observed outcome is a linear function of the log odds of the predicted probabilities. The transformation is log odds (logit, the link function for a generalized linear model for the expected value of the outcome).

          For calibration of count outcome variables (0, 1, 2, 3, ...), the calibration slope may be computed from a statistical model where the log of the predicted mean counts is a linear function of the empirical frequencies determined by unique combinations of the covariates. The transformation is log, the link function, of the counts.

          There are other types of outcome variables for which the calibration slope may be obtained.

          Slopes further away from 1.0 indicate, at upper and lower values, over- or under-confidence in the prediction.

          A statistic that represents the variation or spread among values in the set of estimates across studies.

          measure of statistical heterogeneity

          There are several types of heterogeneity (or diversity), which are important factors in determining whether or not evidence should be pooled. Clinical heterogeneity may refer to variations in the population, intervention, comparator, or outcome. Methodological heterogeneity may refer to variations in study design. Statistical heterogeneity is described here.

          A measure of dispersion is defined as a statistic that represents the variation or spread among data values in a dataset or data distribution. In the context of a meta-analysis, a measure of heterogeneity is a measure of dispersion in which the dataset is the set of estimates across studies.

           

          The Risk of Bias Terminology Working Group found that of the 13 terms open for vote last week, 7 passed, 5 did not have enough votes and are still open, and one did not pass but was refined based on comments received and was re-opened for vote. There are currently 6 terms open for vote. 

          Code and Voting Results From Last Week

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

          A bias in study eligibility criteria due to a mismatch between the inclusion and exclusion criteria and the research question.

          The mismatch between the inclusion and exclusion criteria and the research question may relate to differences in the population, exposures, or outcomes studied from those of interest.

          SEVCO:00276

          Re-opened

          A bias in study eligibility criteria that is specific to restrictions based on characteristics of the study design, conduct, or findings.

          Any restrictions applied on the basis of study characteristics should not introduce a bias in study eligibility criteria. Examples of such restrictions may include criteria based on study size, study design, study quality, or date when the study was conducted.

          In the ROBIS tool used for risk of bias assessment of systematic reviews, there is a question (1.4 Were all restrictions in eligibility criteria based on study characteristics appropriate?) that is different from the one which refers to whether the eligibility criteria are appropriate to the review question. Therefore, a separate term is available in SEVCO.

          SEVCO:00263

          Still Open

          database search sources inadequate

          A bias in search strategy in which the electronic sources are not sufficient to find the studies available in electronic sources.

          The set of databases (electronic sources) expected to include the studies of interest will vary with the review topic.

          SEVCO:00264

          Still Open

          non-database search sources inadequate

          A bias in search strategy in which the sources other than electronic database sources are not sufficient to find the studies available.

          The set of sources other than databases (electronic sources) expected to include the studies of interest will vary with the review topic.

          SEVCO:00265

          Still Open

          search strategy not sensitive

          A bias in search strategy in which the search terms and combinations of search terms are not sufficient to find the studies available.

          The search terms and combinations of search terms expected to include the studies of interest will vary with the review topic.

          SEVCO:00360

          Still Open

          incoherence among qualitative data, analysis, and interpretation

          A qualitative research bias in which there is any mismatch among hypothesis, data collected, data analysis, and results interpretation.

          The term mismatch applies to an inappropriate or wrong or inadequate relationship.

          SEVCO:00277

          Passed 

          study eligibility criteria limits for study report characteristics not appropriate

          A bias in study eligibility criteria that is specific to restrictions on the status, structure, language, or accessibility of the study report.

          Examples of study report characteristics include publication status (including preprints and unpublished data), format, language, and availability of data. The ROBIS tool used for assessing risk of bias of systematic reviews includes a signaling question '1.5 Were any restrictions in eligibility criteria based on sources of information appropriate?'

          SEVCO:00395

          Passed 

          bias in search strategy

          A study selection bias specific to the strategy used to identify potentially eligible studies.

          SEVCO:00029

          Passed

          mixed methods research bias

          A bias specific to the alignment of design, conduct, analysis or reporting of qualitative research and quantitative research within the same research project.

          Mixed methods research is a research approach that combines both qualitative and quantitative research methods within a single study or research project. This methodology aims to provide a more comprehensive understanding of a research problem by integrating the strengths of both qualitative and quantitative research. Examples of mixed methods research include combining surveys with in-depth interviews, using quantitative data to identify patterns and trends followed by qualitative data to explore the underlying reasons and meanings, or incorporating qualitative findings to help interpret and validate quantitative results. Overall, mixed methods research provides a more holistic understanding of a research question by acknowledging and leveraging the strengths of both qualitative and quantitative approaches.

          SEVCO:00361

          Passed

          bias in mixed methods research design

          A mixed methods research bias in which the mixed methods approach used in a study is not appropriate for the research question and problem.

          inadequate rationale for mixed methods design

The related signaling question in the Mixed Methods Appraisal Tool (MMAT) is '5.1. Is there an adequate rationale for using a mixed methods design to address the research question?'

          Common mixed methods designs include:

Convergent design: The QUAL and QUAN components are usually (but not necessarily) concomitant. The purpose is to examine the same phenomenon by interpreting QUAL and QUAN results (bringing data analysis together at the interpretation stage), or by integrating QUAL and QUAN datasets (e.g., data on same cases), or by transforming data (e.g., quantization of qualitative data).

Sequential explanatory design: Results of the phase 1 QUAN component inform the phase 2 QUAL component. The purpose is to explain QUAN results using QUAL findings. E.g., the QUAN results guide the selection of QUAL data sources and data collection, and the QUAL findings contribute to the interpretation of QUAN results.

Sequential exploratory design: Results of the phase 1 QUAL component inform the phase 2 QUAN component. The purpose is to explore, develop and test an instrument (or taxonomy), or a conceptual framework (or theoretical model). E.g., the QUAL findings inform the QUAN data collection, and the QUAN results allow a statistical generalization of the QUAL findings.

Key references: Creswell et al. (2011); Creswell and Plano Clark (2017); O'Cathain (2010)

          Reference - Hong QN, Pluye P, Fàbregues S, Bartlett G, Boardman F, Cargo M, Dagenais P, Gagnon M-P, Griffiths F, Nicolau B, O’Cathain A, Rousseau M-C, Vedel I. Mixed Methods Appraisal Tool (MMAT), version 2018. Registration of Copyright (#1148552), Canadian Intellectual Property Office, Industry Canada. Accessed at: http://mixedmethodsappraisaltoolpublic.pbworks.com/w/file/fetch/127916259/MMAT_2018_criteria-manual_2018-08-01_ENG.pdf

          SEVCO:00362

          Passed

          ineffective integration of qualitative and quantitative study components

          A mixed methods research bias in which the qualitative research and quantitative research components are not adequately combined.

          SEVCO:00363

          Passed

          inappropriate interpretation of integration of qualitative and quantitative findings

          A mixed methods research bias in which the process of combining the results of the constituent analyses is flawed.

          This criterion is related to meta-inference, which is defined as the overall interpretations derived from integrating qualitative and quantitative findings (Teddlie and Tashakkori, 2009). Meta-inference occurs during the interpretation of the findings from the integration of the qualitative and quantitative components, and shows the added value of conducting a mixed methods study rather than having two separate studies. (Pluye et al 2018)

          SEVCO:00364

          Passed

          inadequate handling of inconsistency between qualitative and quantitative findings

          A mixed methods research bias in which discrepancies in the results from the qualitative and quantitative components are not adequately addressed.

          When integrating the findings from the qualitative and quantitative components, divergences and inconsistencies (also called conflicts, contradictions, discordances, discrepancies, and dissonances) can be found. It is not sufficient to only report the divergences; they need to be explained. Different strategies to address the divergences have been suggested such as reconciliation, initiation, bracketing and exclusion (Pluye et al., 2009b). (Reference - Hong QN, Pluye P, Fàbregues S, Bartlett G, Boardman F, Cargo M, Dagenais P, Gagnon M-P, Griffiths F, Nicolau B, O’Cathain A, Rousseau M-C, Vedel I. Mixed Methods Appraisal Tool (MMAT), version 2018. Registration of Copyright (#1148552), Canadian Intellectual Property Office, Industry Canada. Accessed at: http://mixedmethodsappraisaltoolpublic.pbworks.com/w/file/fetch/127916259/MMAT_2018_criteria-manual_2018-08-01_ENG.pdf)

          FEvIR Platform and Tools Development Updates:

On February 19, the CQL Development Working Group (a CDS EBMonFHIR sub-WG) continued development of the GRADEpro-to-FEvIR Converter Tool on the FEvIR Platform.

On February 22, the Computable EBM Tools Development Working Group reviewed GRADEpro-to-FEvIR Converter progress and gave feedback on the user interface for the new software tool under development.

          HL7 Standards Development Updates:

On February 20, the StatisticsOnFHIR Working Group (a CDS EBMonFHIR sub-WG) continued to review progress on a ComparativeEvidenceReport for a real-world evidence example (https://fevir.net/resources/Project/195718) and on how to represent the variables and evidence contained in a CONSORT diagram using profiles from the Evidence Based Medicine (EBMonFHIR) Implementation Guide. The change request created in this meeting last week (https://jira.hl7.org/browse/FHIR-44740) was discussed in today's HL7 Biomedical Research and Regulation Working Group meeting.

          On February 22, the EBM Implementation Guide Working Group (a CDS EBMonFHIR sub-WG) worked to respond to IG Ballot feedback including adding helpful links to navigate through the pages of the IG. The group also discussed how to represent CONSORT diagrams in FHIR resources using profiles from the IG and how to name these profiles for clarity. 

          On February 22, the HL7 Biomedical Research and Regulation Working Group (BR&R) continued to discuss how to represent eligibility criteria using the Group Resource and added to the following reference page: Expressing Eligibility Criteria using the Group (R6) Resource - Reference Page .  

          Knowledge Ecosystem Liaison/Coordination Updates:

On February 21, the Funding the Ecosystem Infrastructure Working Group discussed a potential panel presentation for the Global Evidence Summit, which would address all 6 programmatic domains of the conference and could cover something like:

          How can IT make all parts of the Evidence Ecosystem more efficient and effective, and what global collaborations are working together to make this a reality in the next year?

          Groups to be represented can include GIN (GINTech, GIN North America), GRADE Working Group, Health Evidence Knowledge Accelerator, and HL7 (EBMonFHIR).

The Funding the Ecosystem Infrastructure Working Group then revised the following statements (started last week) as product definitions for making guidelines computable and for making clinical trial protocols computable:

          Template : For (target customer) who (statement of the need or opportunity), the (product name) is a (product category) that (statement of key benefit – that is, compelling reason to buy)

          Changed from: For (guideline developer) who (wants to share and store their content in computable form), the (Computable Publishing: Guideline Authoring Tool) is a (web-based user interface to enter information for automatic conversion to computable form) that (enables creation of standard computable content without having to learn technical specifications, i.e. enables creating computable content without any software engineering personnel). Creating computable content for guidelines is desired to make updating guidelines more efficient by limiting changes to only the concepts that need to change and coordinating the changes and packaging with linked content, make sharing guidelines easier to find by providing metadata that is used when searchers limit their searches, and make shared guidelines easier to use by providing the guideline content in a form that is interpretable by software tools for guideline development (adaptation) and guideline implementation.

          To: For (guideline developer) who (needs to make guidelines easier to update, find, and use), the (Computable Publishing: Guideline Authoring Tool) is a (web-based user interface to enter information for automatic conversion to computable form) that (uses an interoperable standard).

          For (a clinical trial investigator) who (needs to report the trial protocol in computable form), the (Computable Publishing: Research Protocol Authoring Tool) is a (web-based user interface to enter data for automatic conversion to computable form) that (enables creation of standard computable content without having to learn technical specifications, i.e. enables creating computable content without any software engineering personnel). Creating computable content for research protocols is desired to make updating protocols more efficient by limiting changes to only the concepts that need to change and coordinating the changes and packaging within linked content, make reporting protocols easier by providing the report in forms meeting technical specifications, and make shared protocols easier to use by providing the protocol content in a form that is interpretable by software tools for trial management.

On February 20, the Ontology Management Working Group discussed a request by the Associate Editor of the journal Evidence-Based Toxicology asking for reviewers for an article titled "Translation of core terms of chemical risk assessment into the language of systematic review: research protocol". A preprint of the article can be seen on Zenodo. This project is very similar to our own SEVCO protocol, so we may have expertise that is valuable to this project. If anyone is interested in reviewing this article, please let us know.

On February 23, the GRADE Ontology Working Group found that one term open for vote, Inconsistency, received 10 affirmative votes (10-0) and was added to the GRADE Ontology.

The term Indirectness received 8 affirmative votes (8-0), but we received a comment recommending that the definition be simplified. The definition was changed after a group discussion, and because the definition has changed, the term has been re-opened for vote.

           If you have already voted on the term, Indirectness, please vote again as the definition has been changed. 

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

Indirectness

1. Differences between any of the characteristics of the question of interest (i.e., populations, exposures, interventions, comparators, or outcomes) and the characteristics of the evidence. 2. The degree to which the results of a study apply to a target context outside of that study.

          The question of interest may vary with context, such as the key considerations for a guideline or systematic review. Indirectness is one of the domains that can impact the rating of the certainty of evidence. Related terms used by others for "directness" include "relevance", "external validity", and "applicability". Directness is the extent to which the results of an experimental or observational study apply to a target context outside of that study. Any differences may be considered indirectness. The degree of indirectness may range from being almost non-existent to large. Indirectness may be important or unimportant. The degree and importance of indirectness influences the rating of certainty of evidence, but not the definition of indirectness itself. In a network meta-analysis, an indirect effect estimate describes an A vs. C comparison that is derived from A vs. B studies and B vs. C studies. This use of the term "indirect" is not the intended use of Indirectness.

           

          During the meeting, the GRADE Ontology Working Group discussed an abstract for submission to Global Evidence Summit 2024. The deadline for submission has been extended to February 29, 2024.

          Research Development Updates:

On February 20, the Measuring the Rate of Scientific Knowledge Transfer Working Group reviewed an abstract for the upcoming Global Evidence Summit 2024 (https://www.globalevidencesummit.org/). The abstract deadline has been extended to February 29.

          Releases on the FEvIR Platform:

          An Associated Resources section that shows a list of all the resources referenced in that resource was added to all of the following:


            Comparative Evidence Report Viewer/Builder 0.15.0 : https://fevir.net/resources/Project/176002
            Composition Viewer/Builder 0.16.0 : https://fevir.net/resources/Project/103697
            EvidenceReport Viewer/Builder 0.15.0 : https://fevir.net/resources/Project/176002
            Measure Viewer/Builder 0.2.0 : https://fevir.net/resources/Project/114482
            ResearchStudy Viewer/Builder 0.7.0 : https://fevir.net/resources/Project/29882
            Summary Of Findings Authoring Tool 0.18.0 : https://fevir.net/resources/Project/114899

           

          Quotes for Thought: 

           "Worry is the interest paid by those who borrow trouble. " – George Washington

           "Kindness is the only service that will stand the storm of life and not wash out" – Abraham Lincoln

          “Late February days; and now, at last, might you have thought that winter’s woe was past; so fair the sky was and so soft the air.”— William Morris

          "Resources mean less than resourcefulness" --puzzle book

          "Half of 8 is 3 if you pick the right half." -- math.answers.com

          To get involved or stay informed: HEvKA Project Page on FEvIR Platform, HEvKA Project Page on HL7 Confluence, or join any of the groups that are now meeting in the following weekly schedule:

          Joanne Dehnbostel

          Feb 27, 2024, 12:29:21 PM2/27/24
          to Health Evidence Knowledge Accelerator (HEvKA)

           

          6 people (CE, HL, JD, KS, KW, MA) participated today in up to 4 active working group meetings.

          The Project Management Working Group discussed a strategy for clearing the remaining terminology related FHIR trackers. 

          The Setting the Scientific Record on FHIR Working Group reviewed progress on the GRADEpro to FHIR converter under development on the FEvIR Platform and added the footnotes from GRADEpro for certainty ratings to the converter and agreed that we need more GRADEpro examples for testing. 

          The CQL Development Working Group (a CDS EBMonFHIR sub-WG) extracted the inclusion and exclusion criteria from the next paper we want to develop as an example and started to convert to CQL examples.

          Inclusion criteria:

          1. 21 years of age or older    (AgeInYears() >= 21)
          2. and were scheduled to undergo endoscopic surgical treatment of a primary stone at the urology clinics of the participating large, urban, tertiary-care centers.
          3. The type of surgery for the primary stone — ureteroscopy or percutaneous nephrolithotomy

          https://www.findacode.com/snomed/175953004--endoscopic-laser-fragmentation-of-renal-calculus.html

          UK's NHSDigital SNOMED-CT browser results for percutaneous nephrolithotomy: https://termbrowser.nhs.uk/?perspective=full&conceptId1=386200002&edition=uk-edition&release=v20240214&server=https://termbrowser.nhs.uk/sct-browser-api/snomed&langRefset=999001261000000100,999000691000001104

          1. Patients who were able to provide informed consent
          2. and who had one or more secondary stones on computed tomography (CT) within 90 days before randomization were included.

           

          Exclusion criteria:

          1. Patients with known systemic disease
          2. Patients with  anatomical disorders such as medullary sponge kidney, primary hyperparathyroidism, renal tubular acidosis, sarcoidosis, and horseshoe kidney were excluded.
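A minimal CQL sketch of how the inclusion logic above could be started (only the age expression comes from the meeting notes; the library header, value set names, and FHIR bindings here are hypothetical placeholders, not the group's actual examples):

```cql
// Hypothetical sketch only -- value set names and bindings are illustrative
library StoneTrialEligibility version '0.1.0'

using FHIR version '4.0.1'

context Patient

define "Age 21 or Older":
  AgeInYears() >= 21

define "Scheduled Stone Surgery":
  exists (
    [ServiceRequest: "Ureteroscopy or Percutaneous Nephrolithotomy"] SR
      where SR.status = 'active'
  )

define "Meets Inclusion Criteria":
  "Age 21 or Older" and "Scheduled Stone Surgery"
```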

The Statistic Terminology Working Group found that the two terms open for vote last week (calibration slope and measure of heterogeneity) did not have enough votes to be added to the code system, so they are still open for vote. An additional term, area under the curve, was newly defined, so there are currently 3 terms open for vote as shown below. The group also discovered a bug in the CodeSystem Builder/Viewer that did not allow viewing of some comments; this was fixed during the meeting.

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

calibration slope

A measure of calibration that is the rate of change in the appropriately transformed value per unit change of the correspondingly transformed predicted value.

          • calibration-in-the-small

For calibration of binary outcome variables (0 or 1), the calibration slope is computed from a statistical model where the log odds of the predicted probabilities is a linear function of the empirical frequencies (logistic regression). The transformation is log odds (logit, the link function for a generalized linear model for the expected value of the outcome).

          For calibration of count outcome variables (0, 1, 2, 3, ...), the calibration slope may be computed from a statistical model where the log of the predicted mean counts is a linear function of the empirical frequencies determined by unique combinations of the covariates. The transformation is log, the link function, of the counts.

          There are other types of outcome variables for which the calibration slope may be obtained.

          Slopes further away from 1.0 indicate, at upper and lower values, over- or under-confidence in the prediction.
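As an illustration of the binary-outcome case described above, the calibration slope can be estimated by fitting the logistic model logit(P(y = 1)) = a + b * logit(p_hat) by Newton-Raphson. This is a minimal pure-Python sketch, not SEVCO or FEvIR tooling:

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def calibration_slope(predicted, observed, iterations=25):
    """Estimate the calibration slope for binary outcomes.

    Fits the logistic model logit(P(y = 1)) = a + b * logit(p_hat)
    by Newton-Raphson and returns the slope b.  A slope near 1.0
    indicates well-calibrated predictions.
    """
    x = [logit(p) for p in predicted]
    a, b = 0.0, 1.0  # start at the perfectly calibrated solution
    for _ in range(iterations):
        ga = gb = haa = hab = hbb = 0.0
        for xi, yi in zip(x, observed):
            mu = 1.0 / (1.0 + math.exp(-(a + b * xi)))  # fitted probability
            w = mu * (1.0 - mu)                          # IRLS weight
            ga += yi - mu          # gradient w.r.t. intercept
            gb += (yi - mu) * xi   # gradient w.r.t. slope
            haa += w               # Hessian entries
            hab += w * xi
            hbb += w * xi * xi
        det = haa * hbb - hab * hab
        a += (hbb * ga - hab * gb) / det
        b += (haa * gb - hab * ga) / det
    return b
```

For perfectly calibrated predictions the slope is 1.0; slopes above 1.0 arise when predictions are less extreme than the observed event rates (underconfidence), and slopes below 1.0 when they are more extreme (overconfidence).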

measure of heterogeneity

A statistic that represents the variation or spread among values in the set of estimates across studies.

          • measure of statistical heterogeneity

There are several types of heterogeneity (or diversity), which are important factors in determining whether or not evidence should be pooled. Clinical heterogeneity may refer to variations in the population, intervention, comparator, or outcome. Methodological heterogeneity may refer to variations in study design. Statistical heterogeneity is described here.

          A measure of dispersion is defined as a statistic that represents the variation or spread among data values in a dataset or data distribution. In the context of a meta-analysis, a measure of heterogeneity is a measure of dispersion in which the dataset is the set of estimates across studies.

          area under the curve

          A statistic that summarizes the variation of a quantity of interest across a domain interval of interest.

          • AUC

As examples, in classification tasks the quantity of interest is the true positive rate and the domain interval of interest is the false positive rate (see ROC curve); in pharmacodynamic studies the quantity of interest is the concentration of a drug and the domain interval of interest is time; in intensive care, when assessing lung barotrauma, the quantity of interest is pressure and the domain interval of interest is time. The average quantity is the area under the curve divided by the range of the domain interval of interest.
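A minimal sketch of the trapezoidal-rule computation behind such examples, using hypothetical concentration-time data (the numbers are illustrative only):

```python
def auc_trapezoid(xs, ys):
    """Area under a sampled curve by the trapezoidal rule.

    xs: domain values sorted in increasing order (e.g., time points);
    ys: the quantity of interest at each point (e.g., drug concentration).
    """
    return sum(
        (x1 - x0) * (y0 + y1) / 2.0
        for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:]))
    )

# Hypothetical concentration-time data (hours, mg/L)
times = [0, 1, 2, 4, 8]
concentrations = [0.0, 10.0, 8.0, 4.0, 1.0]
auc = auc_trapezoid(times, concentrations)      # 36.0 (mg*h/L)
average = auc / (times[-1] - times[0])          # 4.5 (mg/L)
```

Dividing the area by the width of the domain interval gives the average quantity, as in the comment for application.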

           

          Releases on the FEvIR Platform:

          The Fast Evidence Interoperability Resources (FEvIR) Platform is available for use now, but is “pre-release”.  The current version is 0.205.1 (February 26, 2024). Viewing resources is open without login.  Signing in is free and required to create content (which can then only be edited by the person who created the content).

          Release 0.205.1 (February 26, 2024) added CQL project to homepage

          FEvIR®: CodeSystem Builder/Viewer version 0.40.1 (February 26, 2024) creates and displays code system terms (concepts) in a CodeSystem Resource.

          Release 0.40.1 (February 26, 2024) fixed a bug for owners of a codesystem that prevented them from seeing the comments if a term had no votes.

          Quote for thought: 

          "There are known knowns. These are things we know that we know. There are known unknowns. That is to say,  there are things that we know we don't know."--Donald Rumsfeld

          Joanne Dehnbostel

          Feb 28, 2024, 1:39:45 PM2/28/24
          to Health Evidence Knowledge Accelerator (HEvKA)


          4 people (HL, JD, KS, KW) participated today in up to 2 active working group meetings.

          The Rate of Scientific Knowledge Transfer Working Group did not meet today to allow participation in the GIN Workshop titled "Empowering global guideline adaptability: Exploring shortcomings of BMJ Rapid Recommendations and strategies for global guidelines that are easier to adapt". The abstract representing the Rate of Scientific Knowledge Transfer research project for Global Evidence Summit 2024 was submitted asynchronously. 

          The StatisticsOnFHIR Working Group (a CDS EBMonFHIR sub-WG) continued to review progress on a ComparativeEvidenceReport for a real world evidence example https://fevir.net/resources/Project/195718 , and how to represent the variables and evidence contained in a CONSORT diagram and in "Table 1" using profiles from the Evidence Based Medicine (EBMonFHIR) Implementation Guide.

          The Ontology Management Working Group worked on the abstract to represent the GRADE Ontology project at the Global Evidence Summit 2024 incorporating many comments and suggestions from the group.

          Releases on the FEvIR Platform:

          The FEvIR®: RAte of Dissemination Assessment Research (RADAR) Tool Version 0.9.0 (February 27, 2024) facilitates research conduct and project management for an investigation to measure the rate of scientific knowledge transfer.

          Release 0.9.0 (February 27, 2024) allows the admin of an investigation to remove investigators or admin users (other than the owner)

          Quote for Thought: "You have to go out on a limb to get the fruit" --puzzle book

          Joanne Dehnbostel

          Feb 28, 2024, 11:23:44 PM2/28/24
          to Health Evidence Knowledge Accelerator (HEvKA)


          8 people (BK, HL, IK, JD, JO, KR, KS, MA) participated today in 3 active working group meetings.

          The Funding the Ecosystem Infrastructure Working Group revised the following statement (started two weeks ago) as a product definition for making guidelines computable and for making clinical trial protocols computable:

          Template : For (target customer) who (statement of the need or opportunity), the (product name) is a (product category) that (statement of key benefit – that is, compelling reason to buy)

          Changed from: For (guideline developer) who (needs to make guidelines easier to update, find, and use), the (Computable Publishing: Guideline Authoring Tool) is a (web-based user interface to enter information for automatic conversion to computable form) that (uses an interoperable standard). 

To: For (guideline developer) who (needs to make guidelines easier to update, find, and use), the (Computable Publishing: Guideline Authoring Tool) is a (web-based user interface to enter information for automatic conversion to computable form) that (uses an interoperable standard). Making this information computable and interoperable enables guideline information to be shared seamlessly across users and systems without data re-entry.

          Changed from : For (a clinical trial investigator) who (needs to report the trial protocol in computable form), the (Computable Publishing: Research Protocol Authoring Tool) is a (web-based user interface to enter data for automatic conversion to computable form) that (enables creation of standard computable content without having to learn technical specifications, i.e. enables creating computable content without any software engineering personnel). Creating computable content for research protocols is desired to make updating protocols more efficient by limiting changes to only the concepts that need to change and coordinating the changes and packaging within linked content, make reporting protocols easier by providing the report in forms meeting technical specifications, and make shared protocols easier to use by providing the protocol content in a form that is interpretable by software tools for trial management.

          To: For (a clinical trial investigator) who (needs to make trial protocols easier to report, update, and share),  the (Computable Publishing: Research Protocol Authoring Tool) is a (web-based user interface to enter information for automatic conversion to computable form) that (uses an interoperable standard). Making this information computable and interoperable enables protocol information to be shared seamlessly across users and systems without data re-entry.

          The group then reviewed the Accelerating Care Transformation (ACT) draft charter for stakeholder input. 

          The Communications Working Group discussed daily updates and websites which deliver information about the HEvKA project. We then discussed possible submissions for the AMIA 2024 annual symposium, the submission deadline is March 18, 2024.

          The Board of Directors of the Scientific Knowledge Accelerator Foundation (SKAF) discussed clarifying the role of SKAF, improving the SKAF website, and creating an annual report. 

          Quote for thought: "The grass is greener where you water it." -- Wolfgang Puck

          Joanne Dehnbostel

          Mar 1, 2024, 10:48:25 PM3/1/24
          to Health Evidence Knowledge Accelerator (HEvKA)

          4 people (CE, IK, JD, KS) participated today in 2 active working group meetings.

          The EBM Implementation Guide Working Group (a CDS EBMonFHIR sub-WG) discussed software needed to create content and maintain the EBMonFHIR Implementation Guide. 

          The Computable EBM Tools Development Working Group reviewed the GRADEpro to FEvIR converter software tool in progress and discussed the Global Evidence Summit 2024 conference which will take place in Prague in September. 

          Releases on the FEvIR Platform:

          The FEvIR®: RAte of Dissemination Assessment Research (RADAR) Tool Version 0.9.1 (February 29, 2024) facilitates research conduct and project management for an investigation to measure the rate of scientific knowledge transfer.

          Release 0.9.1 (February 29, 2024) simplified the text for some of the alert messages.

          Quote for thought "Sometimes all you need is a big leap of faith." --Sean Bean

          Joanne Dehnbostel

          Mar 2, 2024, 4:50:57 PM3/2/24
          to Health Evidence Knowledge Accelerator (HEvKA)

           

           

          11 people (CA-D, HK, HL, JB, JD, KS, KW, PW, SL, SM, TD) participated today in up to 3 active working group meetings. 

The Risk of Bias Terminology Working Group found that, of the 6 terms open for vote last week, only one (database search sources inadequate) received enough votes to be added to the code system. The remaining 5 are still open for vote as shown below. The group then defined one additional term (search strategy limits for study report characteristics not appropriate), which is now also open for vote. There are currently 6 risk of bias terms open for vote.

          Code and Voting Results From Last Week

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

          SEVCO:00274

          Still Open

          A bias in study eligibility criteria due to a mismatch between the inclusion and exclusion criteria and the research question.

The mismatch between the inclusion and exclusion criteria and the research question may relate to differences in the population, exposures, or outcomes studied from those of interest.

          SEVCO:00276

          Still Open

          A bias in study eligibility criteria that is specific to restrictions based on characteristics of the study design, conduct, or findings.

          Any restrictions applied on the basis of study characteristics should not introduce a bias in study eligibility criteria. Examples of such restrictions may include criteria based on study size, study design, study quality, or date when the study was conducted.

          In the ROBIS tool used for risk of bias assessment of systematic reviews, there is a question (1.4 Were all restrictions in eligibility criteria based on study characteristics appropriate?) that is different from the one which refers to whether the eligibility criteria are appropriate to the review question. Therefore, a separate term is available in SEVCO.

          SEVCO:00264

          Still Open

          non-database search sources inadequate

          A bias in search strategy in which the sources other than electronic database sources are not sufficient to find the studies available.

          The set of sources other than databases (electronic sources) expected to include the studies of interest will vary with the review topic.

          SEVCO:00265

          Still Open

          search strategy not sensitive

          A bias in search strategy in which the search terms and combinations of search terms are not sufficient to find the studies available.

          The search terms and combinations of search terms expected to include the studies of interest will vary with the review topic.

          SEVCO:00266

          Newly Open

          search strategy limits for study report characteristics not appropriate

          A bias resulting from search strategy criteria implemented due to practical considerations for implementation, including properties of a report or its accessibility.

A bias in search strategy that is specific to restrictions on the date, status, structure, language, or accessibility of the study report. Accessibility refers to where a resource is available and how one gains access to it.

          SEVCO:00360

          Still Open

          incoherence among qualitative data, analysis, and interpretation

          A qualitative research bias in which there is any mismatch among hypothesis, data collected, data analysis, and results interpretation.

The term mismatch applies to an inappropriate, wrong, or inadequate relationship.

          SEVCO:00263

          Passed

          database search sources inadequate

          A bias in search strategy in which the electronic sources are not sufficient to find the studies available in electronic sources.

          The set of databases (electronic sources) expected to include the studies of interest will vary with the review topic.

           

The GRADE Ontology Working Group discussed the overlap between the SEVCO Risk of Bias Code System and the GRADE Ontology.

          An abstract describing the project was submitted yesterday for Global Evidence Summit 2024.

          The term that was open for vote, Indirectness, received 4 affirmative votes and 6 opposing votes with comments.

          The definition was revised as shown below and reopened for vote.  The meeting was recorded and can be viewed here.  

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

          Indirectness

          Anticipated differences between the results observed in the included studies and the results that would be observed in the target context of interest, because of differences between the characteristics of the question and the characteristics of the evidence.

          The question of interest may vary with context, such as the key considerations for a guideline or systematic review. Indirectness is one of the domains that can impact the rating of the certainty of evidence. Related terms used by others for "directness" include "relevance", "external validity", and "applicability". Directness is the extent to which the results of an experimental or observational study apply to a target context outside of that study. Any differences may be considered indirectness. The degree of indirectness may range from being almost non-existent to large. Indirectness may be important or unimportant. The degree and importance of indirectness influences the rating of certainty of evidence, but not the definition of indirectness itself. In a network meta-analysis, an indirect effect estimate describes an A vs. C comparison that is derived from A vs. B studies and B vs. C studies. This use of the term "indirect" is not the intended use of Indirectness. Differences might include feasibility, populations, exposures, interventions, comparators, or outcomes.

           

          The Project Management Working Group prepared the suggested agenda for next week:

          Monday 8-9 am

          Project Management

          FHIR changes and EBMonFHIR Implementation Guide issues

          Monday 9-10 am

          Setting the Scientific Record on FHIR

          Development of GRADEpro-to-FEvIR Converter

          Monday 10-11 am

          CQL Development (a CDS EBMonFHIR sub-WG)

          Develop CQL for reference codes using CQL learning materials

          Monday 2-3 pm

          Statistic Terminology

          SEVCO terms for measures of calibration (3 terms open for vote, Calibration Slope, measure of heterogeneity, area under the curve)

          Tuesday 9 am-10 am

          Measuring the Rate of Scientific Knowledge Transfer

          Review software improvements, discuss future research

          Tuesday 2-3 pm

          StatisticsOnFHIR (a CDS EBMonFHIR sub-WG)

          Review progress on ComparativeEvidenceReport example

          Tuesday 3-4 pm

          Ontology Management

          Review SEVCO Wordpress website and make improvements

          Wednesday 8-9 am

          Funding the Ecosystem Infrastructure

          Review product definition for making guidelines computable

          Wednesday 9-10 am

          Communications (Awareness, Scholarly Publications)

          Publications -Study Design Terminology Paper-adjustments for submission to IJE journal, Presentations - AMIA, Website

          Thursday 8-9 am

          EBM Implementation Guide (a CDS EBMonFHIR sub-WG)

          Download software for IG development

          Thursday 9-10 am

          Computable EBM Tools Development

          Review GRADEpro-to-FEvIR Converter progress

          Friday 9-10 am

          Risk of Bias Terminology

          Review SEVCO terms for qualitative research bias and study selection bias (6 terms open for vote)

          Friday 10-11 am

          GRADE Ontology

          Term development (Indirectness - 1 term open for vote)

          Friday 12-1 pm

          Project Management

          Prepare Weekly Agenda

           

           

          Quote for Thought: "To truly find yourself, you should play hide and seek alone" –  Fortune Cookie

          Joanne Dehnbostel

          Mar 4, 2024, 1:58:26 PM3/4/24
          to Health Evidence Knowledge Accelerator (HEvKA), Health Evidence Knowledge Accelerator (HEvKA) Weekly Update

          Project Coordination Updates:

18 people (BA, BK, CA-D, CE, HK, HL, IK, JB, JD, JO, KR, KS, KW, MA, PW, SL, SM, TD) from 10 countries (Belgium, Canada, Chile/Spain, China, Finland, India, Norway, Peru, UK, USA) participated this week (February 26 - March 1, 2024) in up to 14 active working group meetings.

27 people (BA, BK, CA-D, CE, CM, GL, HK, HL, IK, JB, JD, JJ, JO, KR, KS, KW, LL, MA, MH, PW, RC, RL, SL, SM, SS, TD, XS) from 13 countries (Belgium, Brazil, Canada, Chile/Spain, China, Finland, Germany, India, Norway, Peru, Taiwan, UK, USA) participated this month (February 2024) in up to 56 active working group meetings.

           

          On February 26, the Project Management Working Group discussed a strategy for clearing the remaining terminology related FHIR trackers. 

          On February 26, the Setting the Scientific Record on FHIR Working Group reviewed progress on the GRADEpro to FHIR converter under development on the FEvIR Platform and added the footnotes from GRADEpro for certainty ratings to the converter and agreed that we need more GRADEpro examples for testing. 

On February 28, the Communications Working Group discussed daily updates and websites that deliver information about the HEvKA project. We then discussed possible submissions for the AMIA 2024 annual symposium; the submission deadline is March 18, 2024.

          On March 1, the Project Management Working Group prepared the suggested weekly agenda for the week of March 4-8, 2024:

          Day/Time (Eastern)

          Working Group

          Agenda Items

          Monday 8-9 am

          Project Management

          FHIR changes and EBMonFHIR Implementation Guide issues

          Monday 9-10 am

          Setting the Scientific Record on FHIR

          Development of GRADEpro-to-FEvIR Converter

          Monday 10-11 am

          CQL Development (a CDS EBMonFHIR sub-WG)

          Develop CQL for reference codes using CQL learning materials

          Monday 2-3 pm

          Statistic Terminology

          SEVCO terms for measures of calibration (3 terms open for vote, Calibration Slope, measure of heterogeneity, area under the curve)

          Tuesday 9 am-10 am

          Measuring the Rate of Scientific Knowledge Transfer

          Review software improvements, discuss future research

          Tuesday 2-3 pm

          StatisticsOnFHIR (a CDS EBMonFHIR sub-WG)

          Review progress on ComparativeEvidenceReport example

          Tuesday 3-4 pm

          Ontology Management

          Review SEVCO Wordpress website and make improvements

          Wednesday 8-9 am

          Funding the Ecosystem Infrastructure

          Review product definition for making guidelines computable

          Wednesday 9-10 am

          Communications (Awareness, Scholarly Publications)

          Publications -Study Design Terminology Paper-adjustments for submission to IJE journal, Presentations - AMIA, Website

          Thursday 8-9 am

          EBM Implementation Guide (a CDS EBMonFHIR sub-WG)

          Download software for IG development

          Thursday 9-10 am

          Computable EBM Tools Development

          Review GRADEpro-to-FEvIR Converter progress

          Friday 9-10 am

          Risk of Bias Terminology

          Review SEVCO terms for qualitative research bias and study selection bias (6 terms open for vote)

          Friday 10-11 am

          GRADE Ontology

          Term development (Indirectness - 1 term open for vote)

          Friday 12-1 pm

          Project Management

          Prepare Weekly Agenda

           

          SEVCO Updates

On February 26, the Statistic Terminology Working Group found that the two terms open for vote last week (calibration slope and measure of heterogeneity) did not have enough votes to be added to the code system, so they are still open for vote. An additional term, area under the curve, was newly defined, so there are currently 3 terms open for vote as shown below. The group also discovered a bug in the CodeSystem Builder/Viewer that did not allow viewing of some comments; this was fixed during the meeting.

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

          calibration slope

          A measure of calibration that is the rate of change in the appropriately transformed value per unit change of the correspondingly transformed predicted value.

          • calibration-in-the-small

For calibration of binary outcome variables (0 or 1), the calibration slope is computed from a statistical model where the log odds of the predicted probabilities is a linear function of the empirical frequencies (logistic regression). The transformation is log odds (logit, the link function for a generalized linear model for the expected value of the outcome).

          For calibration of count outcome variables (0, 1, 2, 3, ...), the calibration slope may be computed from a statistical model where the log of the predicted mean counts is a linear function of the empirical frequencies determined by unique combinations of the covariates. The transformation is log, the link function, of the counts.

          There are other types of outcome variables for which the calibration slope may be obtained.

          Slopes further away from 1.0 indicate, at upper and lower values, over- or under-confidence in the prediction.

          measure of heterogeneity

          A statistic that represents the variation or spread among values in the set of estimates across studies.

          • measure of statistical heterogeneity

There are several types of heterogeneity (or diversity), which are important factors in determining whether or not evidence should be pooled. Clinical heterogeneity may refer to variations in the population, intervention, comparator, or outcome. Methodological heterogeneity may refer to variations in study design. Statistical heterogeneity is described here.

          A measure of dispersion is defined as a statistic that represents the variation or spread among data values in a dataset or data distribution. In the context of a meta-analysis, a measure of heterogeneity is a measure of dispersion in which the dataset is the set of estimates across studies.

area under the curve

A statistic that summarizes the variation of a quantity of interest across a domain interval of interest.

          • AUC

As examples, in classification tasks, the quantity of interest is the true positive rate, and the domain interval of interest is the false positive rate (see ROC curve); in pharmacodynamic studies, the quantity of interest is the concentration of a drug and the domain interval of interest is time. The average quantity is the area under the curve divided by the range of the domain interval of interest. In intensive care, when assessing lung barotrauma, the quantity of interest is pressure and the domain interval of interest is time.
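As an illustration of the binary-outcome case described for the calibration slope above, the following is a minimal Python sketch (the function name and the simulated data are hypothetical, not part of SEVCO or the FEvIR tooling): it regresses observed binary outcomes on the log odds of the predicted probabilities, so the fitted coefficient is the calibration slope.

```python
import numpy as np

def logit(p):
    return np.log(p / (1 - p))

def calibration_slope(y, p_pred, lr=0.1, n_iter=5000):
    """Estimate the calibration slope for binary outcomes.

    Fits a logistic regression of observed outcomes y on the log odds
    (logit) of the predicted probabilities p_pred, by gradient ascent
    on the log-likelihood. A slope of 1.0 indicates well-calibrated
    spread of the predictions; slopes below 1.0 indicate overconfidence.
    """
    x = logit(np.asarray(p_pred, dtype=float))
    y = np.asarray(y, dtype=float)
    a, b = 0.0, 1.0  # intercept and slope starting values
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(a + b * x)))  # current fitted probabilities
        a += lr * np.mean(y - p)                # gradient step for intercept
        b += lr * np.mean((y - p) * x)          # gradient step for slope
    return b

# Simulated check: outcomes drawn from the stated probabilities are
# well calibrated (slope near 1); artificially sharpened predictions
# (logits doubled) are overconfident (slope near 0.5).
rng = np.random.default_rng(0)
p_true = rng.uniform(0.05, 0.95, size=20000)
y = rng.binomial(1, p_true)
slope_ok = calibration_slope(y, p_true)
p_over = 1.0 / (1.0 + np.exp(-2.0 * logit(p_true)))
slope_over = calibration_slope(y, p_over)
```

A production analysis would use a maintained statistics package for the regression; the gradient-ascent loop here only keeps the sketch self-contained.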

           

          On March 1, the Risk of Bias Terminology Working Group found that of the 6 terms open for vote last week only one (database search sources inadequate) received enough votes to be added to the code system. The remaining 5 are still open for vote as shown below. The group then defined one additional term (search strategy limits for study report characteristics not appropriate) which is now also open for vote. There are currently 6 risk of bias terms open for vote.

          Code and Voting Results From Last Week

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

          SEVCO:00274

          Still Open

          study eligibility criteria not appropriate for review question

          A bias in study eligibility criteria due to a mismatch between the inclusion and exclusion criteria and the research question.

The mismatch between the inclusion and exclusion criteria and the research question may relate to differences in the population, exposures, or outcomes studied from those of interest.

          A bias in study eligibility criteria that is specific to restrictions based on characteristics of the study design, conduct, or findings.

          Any restrictions applied on the basis of study characteristics should not introduce a bias in study eligibility criteria. Examples of such restrictions may include criteria based on study size, study design, study quality, or date when the study was conducted.

          In the ROBIS tool used for risk of bias assessment of systematic reviews, there is a question (1.4 Were all restrictions in eligibility criteria based on study characteristics appropriate?) that is different from the one which refers to whether the eligibility criteria are appropriate to the review question. Therefore, a separate term is available in SEVCO.

          SEVCO:00264

          Still Open

          non-database search sources inadequate

          A bias in search strategy in which the sources other than electronic database sources are not sufficient to find the studies available.

          The set of sources other than databases (electronic sources) expected to include the studies of interest will vary with the review topic.

          SEVCO:00265

          Still Open

          search strategy not sensitive

          A bias in search strategy in which the search terms and combinations of search terms are not sufficient to find the studies available.

          The search terms and combinations of search terms expected to include the studies of interest will vary with the review topic.

          SEVCO:00266

          Newly Open

          search strategy limits for study report characteristics not appropriate

          A bias resulting from search strategy criteria implemented due to practical considerations for implementation, including properties of a report or its accessibility.

A bias in search strategy that is specific to restrictions on the date, status, structure, language, or accessibility of the study report. Accessibility refers to where a resource is available and how one gains access to it.

          SEVCO:00360

          Still Open

          incoherence among qualitative data, analysis, and interpretation

          A qualitative research bias in which there is any mismatch among hypothesis, data collected, data analysis, and results interpretation.

          The term mismatch applies to an inappropriate or wrong or inadequate relationship.

          SEVCO:00263

          Passed

          database search sources inadequate

          A bias in search strategy in which the electronic sources are not sufficient to find the studies available in electronic sources.

          The set of databases (electronic sources) expected to include the studies of interest will vary with the review topic.

           

          FEvIR Platform and Tools Development Updates:

          On February 26, the CQL Development Working Group (a CDS EBMonFHIR sub-WG) extracted the inclusion and exclusion criteria from the next paper we want to develop as an example and started to convert to CQL examples.

          Inclusion criteria:

          1. 21 years of age or older    (AgeInYears() >= 21)
          2. and were scheduled to undergo endoscopic surgical treatment of a primary stone at the urology clinics of the participating large, urban, tertiary-care centers.
          3. The type of surgery for the primary stone — ureteroscopy or percutaneous nephrolithotomy

          https://www.findacode.com/snomed/175953004--endoscopic-laser-fragmentation-of-renal-calculus.html

          UK's NHSDigital SNOMED-CT browser results for percutaneous nephrolithotomy: https://termbrowser.nhs.uk/?perspective=full&conceptId1=386200002&edition=uk-edition&release=v20240214&server=https://termbrowser.nhs.uk/sct-browser-api/snomed&langRefset=999001261000000100,999000691000001104

          4. Patients who were able to provide informed consent

          5. and who had one or more secondary stones on computed tomography (CT) within 90 days before randomization were included.

           

          Exclusion criteria:

          1. Patients with known systemic disease

2. Patients with anatomical disorders such as medullary sponge kidney, primary hyperparathyroidism, renal tubular acidosis, sarcoidosis, and horseshoe kidney were excluded.
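The criteria above are being converted to CQL; as a rough Python illustration of the same mapping (a hypothetical sketch: the Patient fields and condition strings are invented for this example and are not drawn from the working group's CQL), the narrative criteria reduce to a boolean eligibility expression:

```python
from dataclasses import dataclass, field
from typing import Optional, Set

@dataclass
class Patient:
    age_in_years: int
    scheduled_surgery: str            # e.g. "ureteroscopy" or "percutaneous nephrolithotomy"
    consented: bool
    # days between the CT showing a secondary stone and randomization; None if no such CT
    days_since_secondary_stone_ct: Optional[int] = None
    conditions: Set[str] = field(default_factory=set)

# Exclusion criteria as plain condition labels (hypothetical encoding;
# real CQL would use terminology codes such as SNOMED CT concepts)
EXCLUDED_CONDITIONS = {
    "known systemic disease", "medullary sponge kidney", "primary hyperparathyroidism",
    "renal tubular acidosis", "sarcoidosis", "horseshoe kidney",
}

def eligible(p: Patient) -> bool:
    """Apply the inclusion criteria, then the exclusion criteria."""
    included = (
        p.age_in_years >= 21                      # mirrors AgeInYears() >= 21 in CQL
        and p.scheduled_surgery in {"ureteroscopy", "percutaneous nephrolithotomy"}
        and p.consented
        and p.days_since_secondary_stone_ct is not None
        and p.days_since_secondary_stone_ct <= 90  # CT within 90 days before randomization
    )
    return included and not (p.conditions & EXCLUDED_CONDITIONS)
```

The point of the CQL work is that the same logic is expressed against standard terminologies and FHIR data rather than ad hoc strings, so it can run against real patient records.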

           

          On February 29, the Computable EBM Tools Development Working Group reviewed the GRADEpro to FEvIR converter software tool in progress and discussed the Global Evidence Summit 2024 conference which will take place in Prague in September. 

           

          HL7 Standards Development Updates:

On February 27, the StatisticsOnFHIR Working Group (a CDS EBMonFHIR sub-WG) continued to review progress on a ComparativeEvidenceReport for a real-world evidence example (https://fevir.net/resources/Project/195718), and how to represent the variables and evidence contained in a CONSORT diagram and in "Table 1" using profiles from the Evidence Based Medicine (EBMonFHIR) Implementation Guide.

          On February 29, the EBM Implementation Guide Working Group (a CDS EBMonFHIR sub-WG) discussed software needed to create content and maintain the EBMonFHIR Implementation Guide. 

           

           

          Knowledge Ecosystem Liaison/Coordination Updates:

          On February 27, the Ontology Management Working Group worked on the abstract to represent the GRADE Ontology project at the Global Evidence Summit 2024 incorporating many comments and suggestions from the group.

          On February 28, the Funding the Ecosystem Infrastructure Working Group revised the following statement (started two weeks ago) as a product definition for making guidelines computable and for making clinical trial protocols computable:

          Template : For (target customer) who (statement of the need or opportunity), the (product name) is a (product category) that (statement of key benefit – that is, compelling reason to buy)

          Changed from: For (guideline developer) who (needs to make guidelines easier to update, find, and use), the (Computable Publishing: Guideline Authoring Tool) is a (web-based user interface to enter information for automatic conversion to computable form) that (uses an interoperable standard). 

          To: For (guideline developer) who (needs to make guidelines easier to update, find, and use), the (Computable Publishing: Guideline Authoring Tool) is a (web-based user interface to enter information for automatic conversion to computable form) that (uses an interoperable standard). Making this information computable and interoperable enables guideline information to be shared seamlessly across users and systems without data re-entry.

          Changed from : For (a clinical trial investigator) who (needs to report the trial protocol in computable form), the (Computable Publishing: Research Protocol Authoring Tool) is a (web-based user interface to enter data for automatic conversion to computable form) that (enables creation of standard computable content without having to learn technical specifications, i.e. enables creating computable content without any software engineering personnel). Creating computable content for research protocols is desired to make updating protocols more efficient by limiting changes to only the concepts that need to change and coordinating the changes and packaging within linked content, make reporting protocols easier by providing the report in forms meeting technical specifications, and make shared protocols easier to use by providing the protocol content in a form that is interpretable by software tools for trial management.

          To: For (a clinical trial investigator) who (needs to make trial protocols easier to report, update, and share),  the (Computable Publishing: Research Protocol Authoring Tool) is a (web-based user interface to enter information for automatic conversion to computable form) that (uses an interoperable standard). Making this information computable and interoperable enables protocol information to be shared seamlessly across users and systems without data re-entry.

          The group then reviewed the Accelerating Care Transformation (ACT) draft charter for stakeholder input. 

           

          On March 1, the Grade Ontology Working Group discussed the overlap between the SEVCO Risk of Bias Code System and the GRADE Ontology. 

          The term that was open for vote, Indirectness, received 4 affirmative votes and 6 opposing votes with comments.

          The definition was revised as shown below and reopened for vote.  The meeting was recorded and can be viewed here.  

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

          Indirectness

          Anticipated differences between the results observed in the included studies and the results that would be observed in the target context of interest, because of differences between the characteristics of the question and the characteristics of the evidence.

          The question of interest may vary with context, such as the key considerations for a guideline or systematic review. Indirectness is one of the domains that can impact the rating of the certainty of evidence. Related terms used by others for "directness" include "relevance", "external validity", and "applicability". Directness is the extent to which the results of an experimental or observational study apply to a target context outside of that study. Any differences may be considered indirectness. The degree of indirectness may range from being almost non-existent to large. Indirectness may be important or unimportant. The degree and importance of indirectness influences the rating of certainty of evidence, but not the definition of indirectness itself. In a network meta-analysis, an indirect effect estimate describes an A vs. C comparison that is derived from A vs. B studies and B vs. C studies. This use of the term "indirect" is not the intended use of Indirectness. Differences might include feasibility, populations, exposures, interventions, comparators, or outcomes.
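To make the network meta-analysis contrast in the comment concrete, here is a hedged sketch (not part of the GRADE ontology definitions) of the standard Bucher-style indirect comparison: on an additive scale such as log odds ratios, an A vs. C effect estimate is derived from A vs. B and B vs. C effects, with their variances summing.

```python
import math

def indirect_comparison(d_ab: float, se_ab: float, d_bc: float, se_bc: float):
    """Indirect A vs. C effect estimate from A vs. B and B vs. C effects.

    Effects are assumed to be on an additive scale (e.g. log odds ratios);
    the standard errors combine in quadrature, so the derived estimate is
    always less precise than either direct comparison.
    """
    d_ac = d_ab + d_bc
    se_ac = math.sqrt(se_ab ** 2 + se_bc ** 2)
    return d_ac, se_ac
```

This derived "indirect" estimate is a different concept from the Indirectness domain defined above, which concerns mismatch between the evidence and the question of interest.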

          Research Development Updates:

          The Rate of Scientific Knowledge Transfer Working Group did not meet on February 27, to allow participation in the GIN Workshop titled "Empowering global guideline adaptability: Exploring shortcomings of BMJ Rapid Recommendations and strategies for global guidelines that are easier to adapt". The abstract representing the Rate of Scientific Knowledge Transfer research project for Global Evidence Summit 2024 was submitted asynchronously. 

          On February 28, the Board of Directors of the Scientific Knowledge Accelerator Foundation (SKAF) discussed clarifying the role of SKAF, improving the SKAF website, and creating an annual report. 

           

          Releases on the FEvIR Platform:

          The Fast Evidence Interoperability Resources (FEvIR) Platform is available for use now, but is “pre-release”.  The current version is 0.205.1 (February 26, 2024). Viewing resources is open without login.  Signing in is free and required to create content (which can then only be edited by the person who created the content).

          Release 0.205.1 (February 26, 2024) added CQL project to homepage

          FEvIR®: CodeSystem Builder/Viewer version 0.40.1 (February 26, 2024) creates and displays code system terms (concepts) in a CodeSystem Resource.

          Release 0.40.1 (February 26, 2024) fixed a bug for owners of a codesystem that prevented them from seeing the comments if a term had no votes.

          The FEvIR®: RAte of Dissemination Assessment Research (RADAR) Tool Version 0.9.1 (February 29, 2024) facilitates research conduct and project management for an investigation to measure the rate of scientific knowledge transfer.

          Release 0.9.0 (February 27, 2024) allows the admin of an investigation to remove investigators or admin users (other than the owner)

          Release 0.9.1 (February 29, 2024) simplified the text for some of the alert messages.

           

          Quotes for Thought: 

          "There are known knowns. These are things we know that we know. There are known unknowns. That is to say,  there are things that we know we don't know."--Donald Rumsfeld

          "You have to go out on a limb to get the fruit" --puzzle book

          "The grass is greener where you water it." -- Wolfgang Puck

          "Sometimes all you need is a big leap of faith." --Sean Bean

          "To truly find yourself, you should play hide and seek alone" –  Fortune Cookie

          To get involved or stay informed: HEvKA Project Page on FEvIR Platform, HEvKA Project Page on HL7 Confluence, or join any of the groups that are now meeting in the following weekly schedule:

          Joanne Dehnbostel

          Mar 5, 2024, 10:46:10 AM3/5/24
          to Health Evidence Knowledge Accelerator (HEvKA)

           

           

7 people (BA, CE, JD, KS, KW, MA, RL) participated today in up to 4 active working group meetings.

          The Project Management Working Group wrote the proposal for  HL7 Connectathon 36 EBMonFHIR Track and worked on FHIR tracker items. 

          The Setting the Scientific Record on FHIR Working Group was joined by a representative of SRDR and discussed the practical differences between FHIR versions 5 and 6. A Jira request for more R6 examples for the Group Assignment Profile will be submitted. 

          The CQL Development Working Group (HL7 CDS EBMonFHIR sub-WG) continued to create CQL expressions to describe inclusion and exclusion criteria from a real-world example. 

          The Statistic Terminology Working Group found that the three terms open for vote last week did not receive enough votes to be approved for the code system. These terms are still open for vote. The group then defined one additional term (area under the ROC curve). There are currently 4 terms open for vote as shown below.

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

calibration slope

A measure of calibration that is the rate of change in the appropriately transformed value per unit change of the correspondingly transformed predicted value.

          • calibration-in-the-small

For calibration of binary outcome variables (0 or 1), the calibration slope is computed from a statistical model where the log odds of the predicted probabilities is a linear function of the empirical frequencies (logistic regression). The transformation is log odds (logit, the link function for a generalized linear model for the expected value of the outcome).

          For calibration of count outcome variables (0, 1, 2, 3, ...), the calibration slope may be computed from a statistical model where the log of the predicted mean counts is a linear function of the empirical frequencies determined by unique combinations of the covariates. The transformation is log, the link function, of the counts.

          There are other types of outcome variables for which the calibration slope may be obtained.

          Slopes further away from 1.0 indicate, at upper and lower values, over- or under-confidence in the prediction.

measure of heterogeneity

A statistic that represents the variation or spread among values in the set of estimates across studies.

          • measure of statistical heterogeneity

There are several types of heterogeneity (or diversity), which are important factors in determining whether or not evidence should be pooled. Clinical heterogeneity may refer to variations in the population, intervention, comparator, or outcome. Methodological heterogeneity may refer to variations in study design. Statistical heterogeneity is described here.

          A measure of dispersion is defined as a statistic that represents the variation or spread among data values in a dataset or data distribution. In the context of a meta-analysis, a measure of heterogeneity is a measure of dispersion in which the dataset is the set of estimates across studies.

          area under the curve

          A statistic that summarizes the variation of a quantity of interest across a domain interval of interest.

          • AUC

As examples, in classification tasks, the quantity of interest is the true positive rate, and the domain interval of interest is the false positive rate (see ROC curve); in pharmacodynamic studies, the quantity of interest is the concentration of a drug and the domain interval of interest is time. The average quantity is the area under the curve divided by the range of the domain interval of interest. In intensive care, when assessing lung barotrauma, the quantity of interest is pressure and the domain interval of interest is time.

          area under the ROC curve

An area under the curve where the quantity of interest is the true positive rate and the domain interval of interest is the false positive rate.

          • AUC
          • AUROC
          • area under the receiver operating characteristic curve

The area under the ROC curve is used to assess the performance of a classifier used to distinguish between two or more groups. ROC stands for Receiver Operating Characteristic. Another term for true positive rate is sensitivity, and the false positive rate is equal to 1 − specificity.
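As a worked illustration of these two definitions (a minimal numpy sketch; the function names are ours, and production work would typically use a maintained statistics library), the AUROC can be computed by tracing the ROC curve over score thresholds and applying the trapezoidal rule:

```python
import numpy as np

def auc(x, y):
    """Area under a curve y(x) by the trapezoidal rule.

    x is the domain interval of interest (e.g. time for a
    concentration-time curve, or false positive rate for a ROC curve).
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    order = np.argsort(x, kind="stable")
    xs, ys = x[order], y[order]
    return float(np.sum(np.diff(xs) * (ys[1:] + ys[:-1]) / 2.0))

def roc_auc(labels, scores):
    """Area under the ROC curve for binary labels and classifier scores.

    Sweeps a threshold over the scores, collecting the true positive
    rate (sensitivity) and false positive rate at each step.
    """
    labels = np.asarray(labels)
    scores = np.asarray(scores, dtype=float)
    thresholds = np.concatenate(([np.inf], np.sort(scores)[::-1]))
    tpr = [np.mean(scores[labels == 1] >= t) for t in thresholds]
    fpr = [np.mean(scores[labels == 0] >= t) for t in thresholds]
    return auc(fpr, tpr)

# A classifier whose scores perfectly separate the classes has AUROC 1.0
perfect = roc_auc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9])
```

The parent-term `auc` function also covers the pharmacodynamic case, e.g. `auc(times, concentrations)` for a concentration-time curve.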

           

          Quote for thought:  "I don't say she's lyin', Mr. Gilmer, I say she's mistaken in her mind" – Tom Robinson, To Kill a Mockingbird

          Joanne Dehnbostel

          Mar 6, 2024, 10:04:42 AM3/6/24
          to Health Evidence Knowledge Accelerator (HEvKA)

           

6 people (HL, JD, KR, KS, KW, RC) participated today in up to 3 active working group meetings.

          The Measuring the Rate of Scientific Knowledge Transfer Working Group reviewed improvements in the software created for the project and suggested that the administrative user interface should be subordinate to the investigator user interface when looking at the administrator screen. We also decided to repeat the pilot with 3 articles as a trial run for the software updates. 

          The StatisticsOnFHIR WG (a CDS EBMonFHIR sub-WG) continued to work on a real-world example and developed recommendations for a pragmatic user interface that would utilize the concept of a shell table, offering this to the user after the protocol has been entered. 

The Ontology Management Working Group started the process of improving the Scientific Knowledge Accelerator Foundation's SEVCO website. We will continue this work next week. 

          Quote for thought: “The road to success is always under construction" – Lily Tomlin

          Releases on the FEvIR Platform:

          Computable Publishing®: Comparative Evidence Report Authoring Tool version 0.16.0 (March 5, 2024) creates and displays a Composition Resource with a ComparativeEvidenceReport Profile.

Release 0.16.0 (March 5, 2024) now displays the resource type in the "Resource Reference" labels.

          Joanne Dehnbostel

          Mar 7, 2024, 7:28:16 AM3/7/24
          to Health Evidence Knowledge Accelerator (HEvKA)

           

7 people (BA, CE, JD, JO, KR, KS, MA) participated today in up to 2 active working group meetings. 

The Funding the Ecosystem Infrastructure Working Group identified the opportunity to connect the dots between target-focused initiatives in follow-up to ACTS, which are making evidence and guidance more computable (especially care gap reports), evidence ecosystem efforts, and work done by AHRQ, putting evidence into practice in measurable ways. Topics already identified, including chronic kidney disease, sickle cell anemia, pain management, and high blood pressure, would be ideal subjects. This is important work that could be funded. How can we pursue this idea in future meetings? AHRQ is a possible funding source. KDIGO or other guideline organizations might be interested in measuring the uptake of their guidelines. 

          We also discussed an upcoming FDA workshop: https://www.fda.gov/drugs/news-events-human-drugs/fiscal-year-2024-generic-drug-science-and-research-initiatives-public-workshop-05202024

          The Communications Working Group discussed strategies for publication of the SEVCO Study Design terminology, including bundling it with the subset of Risk of Bias Terminology intended for primary articles, which might be ready soon for a SEVCO version 2 release. Ideas for improving the introduction of the current manuscript to interest the epidemiology audience include explaining why this terminology is important for classification, for making it possible to search for papers by study design, and for use with methodology checklists like CONSORT and STROBE. The EBMonFHIR Implementation Guide could also include a Composition Resource profile for checklists.

          Submissions are due soon for the November AMIA conference. We discussed including a special session which would review Making Guidelines Computable: Reflections from the Global Evidence Summit 2024.

          Quote for thought: "I am extraordinarily patient, provided I get my own way in the end." — Margaret Thatcher

          Joanne Dehnbostel

          Mar 8, 2024, 7:36:20 AM
          to Health Evidence Knowledge Accelerator (HEvKA)

           

           

          7 people (BA, CE, GL, IK, JD, JW, KS) participated today in up to 2 active working group meetings.

          The EBM Implementation Guide Working Group (a CDS EBMonFHIR sub-WG) downloaded additional software for IG development and discussed feedback from the IG ballot. We then created a tracker item (https://jira.hl7.org/browse/FHIR-44916) requesting GroupAssignment and ParticipantFlowMeasure examples for EvidenceVariable.

          The Computable EBM Tools Development Working Group briefly discussed progress on the GRADEpro-to-FEvIR Converter software and then discussed participation in upcoming meetings, including the EBMonFHIR Track at the HL7 Connectathon in Dallas, TX in May, and HL7 FHIR DevDays in Minneapolis, MN, June 10-13.

          Quote for thought: “The question isn’t who is going to let me; it’s who is going to stop me.” —Ayn Rand

          Joanne Dehnbostel

          Mar 9, 2024, 2:26:46 PM
          to Health Evidence Knowledge Accelerator (HEvKA)

           

          Reminder: Most of the United States (with the exception of Hawaii and most of Arizona) will "Spring Forward" into daylight saving time this weekend. This means that the HEvKA meetings listed below in Eastern time will occur one hour earlier than last week.

           

          10 people (BA, HK, JB, JD, KS, KW, MA, SL, SM, TD) participated today in up to 3 active working group meetings.

           

          The Risk of Bias Terminology Working Group found that of the 6 terms open for vote last week, 5 terms received enough votes to be added to the code system. The 1 remaining term (search strategy limits for study report characteristics not appropriate) is still open for vote as shown below. The group then defined one additional term (study eligibility criteria not adhered to) which is now also open for vote. There are currently 2 risk of bias terms open for vote.

           

           

          Code and Voting Results From Last Week

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

          SEVCO:00274

          Passed

          study eligibility criteria not appropriate for review question

          A bias in study eligibility criteria due to a mismatch between the inclusion and exclusion criteria and the research question.

          The mismatch between the inclusion and exclusion criteria and the research question may relate to differences in the population, exposures, or outcomes studied from those of interest.

          SEVCO:00276

          Passed

          study eligibility criteria limits for study characteristics not appropriate

          A bias in study eligibility criteria that is specific to restrictions based on characteristics of the study design, conduct, or findings.

          Any restrictions applied on the basis of study characteristics should not introduce a bias in study eligibility criteria. Examples of such restrictions may include criteria based on study size, study design, study quality, or date when the study was conducted.

          In the ROBIS tool used for risk of bias assessment of systematic reviews, there is a question (1.4 Were all restrictions in eligibility criteria based on study characteristics appropriate?) that is different from the one which refers to whether the eligibility criteria are appropriate to the review question. Therefore, a separate term is available in SEVCO.

          SEVCO:00264

          Passed

          non-database search sources inadequate

          A bias in search strategy in which the sources other than electronic database sources are not sufficient to find the studies available.

          The set of sources other than databases (electronic sources) expected to include the studies of interest will vary with the review topic.

          SEVCO:00265

          Passed

          search strategy not sensitive

          A bias in search strategy in which the search terms and combinations of search terms are not sufficient to find the studies available.

          The search terms and combinations of search terms expected to include the studies of interest will vary with the review topic.

          SEVCO:00266

          Still Open

          search strategy limits for study report characteristics not appropriate

          A bias resulting from search strategy criteria implemented due to practical considerations for implementation, including properties of a report or its accessibility.

          A bias in search strategy that is specific to restrictions on the date, status, structure, language, or accessibility of the study report. Accessibility refers to where a resource is available and how one gains access to it.

          SEVCO:00360

          Passed

          incoherence among qualitative data, analysis, and interpretation

          A qualitative research bias in which there is any mismatch among hypothesis, data collected, data analysis, and results interpretation.

          The term mismatch applies to an inappropriate or wrong or inadequate relationship.

          SEVCO:00267

          Newly Open

          study eligibility criteria not adhered to

          A bias in search strategy due to incorrect implementation of the study inclusion and exclusion criteria.


           

          The GRADE Ontology working group found that the 1 term open for vote, indirectness, received 14 votes: 9 affirmative and 5 against. After responding to comments and redrafting the definition and comment for application, the term is open for voting once again. The GRADE Ontology meeting was recorded and can be viewed here. If you voted previously for this term, please vote again, as the definition has changed.

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

          Indirectness

          The lack of similarities between the characteristics of the evidence and the characteristics of the question of interest.

          The question of interest may vary with context, such as the key considerations for a guideline or systematic review. Differences might include feasibility, populations, exposures, interventions, comparators, or outcomes. Indirectness is one of the domains that can impact the rating of the certainty of evidence. Related terms used by others for "directness" include "relevance", "external validity", and "applicability". Directness is the extent to which the results of an experimental or observational study apply to a target context outside of that study. Any differences may be considered indirectness. The degree of indirectness may range from trivial to important for the evidence implementation. The degree and importance of indirectness influences the rating of certainty of evidence, but not the definition of indirectness itself. In a network meta-analysis, an indirect effect estimate describes an A vs. C comparison that is derived from A vs. B studies and B vs. C studies. This use of the term "indirect" is not the intended use of Indirectness.

           

           

          The Project Management Working Group prepared the proposed agenda for March 11-15:

           

          Day/Time (Eastern) | Working Group | Agenda Items

          Monday 8-9 am | Project Management | FHIR changes and EBMonFHIR Implementation Guide issues
          Monday 9-10 am | Setting the Scientific Record on FHIR | Development of GRADEpro-to-FEvIR Converter
          Monday 10-11 am | CQL Development (a CDS EBMonFHIR sub-WG) | Develop CQL for reference codes using CQL learning materials
          Monday 2-3 pm | Statistic Terminology | Review SEVCO terms (4 terms open for vote)
          Tuesday 9-10 am | Measuring the Rate of Scientific Knowledge Transfer | Review software improvements, discuss results of pilot
          Tuesday 2-3 pm | StatisticsOnFHIR (a CDS EBMonFHIR sub-WG) | Review progress on ComparativeEvidenceReport example
          Tuesday 3-4 pm | Ontology Management | Review SEVCO WordPress website; discuss SEVCO version 2 and implications for the study design paper
          Wednesday 8-9 am | Funding the Ecosystem Infrastructure | Review product definition for making guidelines computable
          Wednesday 9-10 am | Communications (Awareness, Scholarly Publications) | Publications: Study Design Terminology Paper adjustments for submission to the IJE journal; Presentations: AMIA, GES2 Special Session
          Thursday 8-9 am | EBM Implementation Guide (a CDS EBMonFHIR sub-WG) | IG development and ballot issues
          Thursday 9-10 am | Computable EBM Tools Development | Review GRADEpro-to-FEvIR Converter progress
          Friday 9-10 am | Risk of Bias Terminology | Review SEVCO terms (2 terms open for vote)
          Friday 10-11 am | GRADE Ontology | Term development (Indirectness - 1 term open for vote)
          Friday 12-1 pm | Project Management | Prepare weekly agenda

           

           

          Quote for thought: "A Dream is a wish the heart makes" – from the Walt Disney film, "Cinderella"

          Joanne Dehnbostel

          Mar 11, 2024, 10:03:00 AM
          to Health Evidence Knowledge Accelerator (HEvKA), Health Evidence Knowledge Accelerator (HEvKA) Weekly Update

           

           

          Project Coordination Updates:

          Reminder: Most of the United States (with the exception of Hawaii and most of Arizona) began daylight saving time this weekend. This means that the HEvKA meetings listed below in Eastern time will occur one hour earlier than last week.

          19 people (BA, CE, GL, HK, HL, IK, JB, JD, JO, JW, KR, KS, KW, MA, RC, RL, SL, SM, TD) from 10 countries (Belgium, Canada, Chile/Spain, China, Finland, Germany, Norway, Taiwan, UK, USA) participated this week in up to 14 active working group meetings.

           

          On March 4, the Project Management Working Group wrote the proposal for the HL7 Connectathon 36 EBMonFHIR Track and worked on FHIR tracker items.

          On March 4, the Setting the Scientific Record on FHIR Working Group was joined by a representative of SRDR and discussed the practical differences between FHIR versions 5 and 6. A Jira request for more R6 examples for the Group Assignment Profile will be submitted. 

          On March 8, the Project Management Working Group prepared the proposed agenda for March 11-15:

          Day/Time (Eastern) | Working Group | Agenda Items

          Monday 8-9 am | Project Management | FHIR changes and EBMonFHIR Implementation Guide issues
          Monday 9-10 am | Setting the Scientific Record on FHIR | Development of GRADEpro-to-FEvIR Converter
          Monday 10-11 am | CQL Development (a CDS EBMonFHIR sub-WG) | Develop CQL for reference codes using CQL learning materials
          Monday 2-3 pm | Statistic Terminology | Review SEVCO terms (4 terms open for vote)
          Tuesday 9-10 am | Measuring the Rate of Scientific Knowledge Transfer | Review software improvements, discuss results of pilot
          Tuesday 2-3 pm | StatisticsOnFHIR (a CDS EBMonFHIR sub-WG) | Review progress on ComparativeEvidenceReport example
          Tuesday 3-4 pm | Ontology Management | Review SEVCO WordPress website; discuss SEVCO version 2 and implications for the study design paper
          Wednesday 8-9 am | Funding the Ecosystem Infrastructure | Review product definition for making guidelines computable
          Wednesday 9-10 am | Communications (Awareness, Scholarly Publications) | Publications: Study Design Terminology Paper adjustments for submission to the IJE journal; Presentations: AMIA, GES2 Special Session
          Thursday 8-9 am | EBM Implementation Guide (a CDS EBMonFHIR sub-WG) | IG development and ballot issues
          Thursday 9-10 am | Computable EBM Tools Development | Review GRADEpro-to-FEvIR Converter progress
          Friday 9-10 am | Risk of Bias Terminology | Review SEVCO terms (2 terms open for vote)
          Friday 10-11 am | GRADE Ontology | Term development (Indirectness - 1 term open for vote)
          Friday 12-1 pm | Project Management | Prepare weekly agenda

          On March 6, the Communications Working Group discussed strategies for publication of the SEVCO Study Design terminology, including bundling it with the subset of Risk of Bias Terminology intended for primary articles, which might be ready soon for a SEVCO version 2 release. Ideas for improving the introduction of the current manuscript to interest the epidemiology audience include explaining why this terminology is important for classification, for making it possible to search for papers by study design, and for use with methodology checklists like CONSORT and STROBE. The EBMonFHIR Implementation Guide could also include a Composition Resource profile for checklists.

          Submissions are due soon for the November AMIA conference. We discussed including a special session which would review Making Guidelines Computable: Reflections from the Global Evidence Summit 2024.

           

          SEVCO Updates:

          On March 4, the Statistic Terminology Working Group found that the three terms open for vote last week did not receive enough votes to be approved for the code system. These terms are still open for vote. The group then defined one additional term (area under the ROC curve). There are currently 4 terms open for vote as shown below.

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

          calibration slope

          A measure of calibration that is the rate of change in the appropriately transformed value per unit change of the correspondingly transformed predicted value.

          • calibration-in-the-small

          For calibration of binary outcome variables (0 or 1), the calibration slope is computed from a statistical model where the log odds of the predicted probabilities is a linear function of the empirical frequencies (logistic regression). The transformation is log odds (logit, the link function for a generalized linear model for the expected value of the outcome).

          For calibration of count outcome variables (0, 1, 2, 3, ...), the calibration slope may be computed from a statistical model where the log of the predicted mean counts is a linear function of the empirical frequencies determined by unique combinations of the covariates. The transformation is log, the link function, of the counts.

          There are other types of outcome variables for which the calibration slope may be obtained.

          Slopes further away from 1.0 indicate, at upper and lower values, over- or under-confidence in the prediction.

          measure of heterogeneity

          A statistic that represents the variation or spread among values in the set of estimates across studies.

          • measure of statistical heterogeneity

          There are several types of heterogeneity (or diversity), which are important factors in determining whether or not evidence should be pooled. Clinical heterogeneity may refer to variations in the population, intervention, comparator, or outcome. Methodological heterogeneity may refer to variations in study design. Statistical heterogeneity is described here.

          A measure of dispersion is defined as a statistic that represents the variation or spread among data values in a dataset or data distribution. In the context of a meta-analysis, a measure of heterogeneity is a measure of dispersion in which the dataset is the set of estimates across studies.

          area under the curve

          A statistic that summarizes the variation of a quantity of interest across a domain interval of interest.

          • AUC

          As examples: in classification tasks, the quantity of interest is the true positive rate and the domain interval of interest is the false positive rate (see ROC curve); in pharmacodynamic studies, the quantity of interest is the concentration of a drug and the domain interval of interest is time; and in intensive care, when assessing lung barotrauma, the quantity of interest is pressure and the domain interval of interest is time. The average quantity is the area under the curve divided by the range of the domain interval of interest.

          area under the ROC curve

          An area under the curve where the curve is the true positive rate and the range of interest is the false positive rate.

          • AUC
          • AUROC
          • area under the receiver operating characteristic curve

          The area under the ROC curve is used to assess the performance of a classifier used to distinguish between two or more groups. ROC stands for Receiver Operating Characteristic. Another term for true positive rate is sensitivity, and another term for false positive rate is 1 - specificity.
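          As a purely illustrative sketch (not part of the SEVCO material; the function name, data, and method are assumptions by the editor), the calibration slope for binary outcomes can be estimated exactly as defined above, by fitting a logistic regression of the observed outcomes on the logit of the predicted probabilities:

```python
# Sketch: estimate a calibration slope for binary outcomes by fitting a
# logistic regression of observed outcomes on logit(predicted probability).
# The data below are simulated for illustration only.
import numpy as np

def calibration_slope(y, p_pred, iters=25):
    """Newton-Raphson fit of intercept + slope * logit(p_pred); returns the slope."""
    x = np.log(p_pred / (1 - p_pred))          # logit transform of the predictions
    X = np.column_stack([np.ones_like(x), x])
    beta = np.zeros(2)
    for _ in range(iters):
        mu = 1 / (1 + np.exp(-X @ beta))       # current fitted probabilities
        W = mu * (1 - mu)                      # iteratively reweighted least squares weights
        beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - mu))
    return beta[1]

rng = np.random.default_rng(0)
p = rng.uniform(0.05, 0.95, size=5000)          # predicted probabilities
y = (rng.uniform(size=5000) < p).astype(float)  # outcomes simulated to be perfectly calibrated
print(calibration_slope(y, p))                  # expected to be close to 1.0
```

          A slope near 1.0 indicates good calibration; slopes below 1.0 suggest predictions that are too extreme (overconfident), consistent with the comment for application above.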

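          For illustration only (a standard textbook formulation chosen by the editor, not taken from the SEVCO definition above), Cochran's Q and the derived I-squared statistic are commonly used measures of statistical heterogeneity across study estimates:

```python
# Sketch: Cochran's Q and I^2 as measures of statistical heterogeneity.
# The estimates and standard errors below are invented for illustration.
import numpy as np

def i_squared(estimates, standard_errors):
    """I^2 = max(0, (Q - df) / Q), with Q computed from inverse-variance weights."""
    est = np.asarray(estimates, dtype=float)
    w = 1.0 / np.asarray(standard_errors, dtype=float) ** 2  # inverse-variance weights
    pooled = np.sum(w * est) / np.sum(w)                     # fixed-effect pooled estimate
    q = np.sum(w * (est - pooled) ** 2)                      # Cochran's Q
    df = len(est) - 1
    return max(0.0, (q - df) / q) if q > 0 else 0.0

# Three widely spread study estimates with equal precision
print(i_squared([0.2, 0.6, 1.0], [0.1, 0.1, 0.1]))  # approximately 0.94 (high heterogeneity)
```

          Larger I-squared values indicate that more of the observed variation across studies reflects heterogeneity rather than chance.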
           
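          A minimal sketch of the area under the ROC curve (illustrative data; the function name is invented): the AUROC equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative case, with ties counted as one half:

```python
# Sketch: area under the ROC curve computed as the fraction of
# positive-negative score pairs that are correctly ordered.
import numpy as np

def auroc(scores, labels):
    """Rank-based AUROC for binary labels (1 = positive, 0 = negative)."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()   # correctly ordered pairs
    ties = (pos[:, None] == neg[None, :]).sum()     # tied pairs count one half
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

scores = np.array([0.9, 0.8, 0.4, 0.5, 0.3, 0.2])
labels = np.array([1, 1, 1, 0, 0, 0])
print(auroc(scores, labels))  # 8 of 9 positive-negative pairs ordered correctly: 8/9
```

          A value of 0.5 corresponds to a classifier no better than chance, and 1.0 to perfect separation of the two groups.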

          On March 8, the Risk of Bias Terminology Working Group found that of the 6 terms open for vote last week, 5 terms received enough votes to be added to the code system. The 1 remaining term (search strategy limits for study report characteristics not appropriate) is still open for vote as shown below. The group then defined one additional term (study eligibility criteria not adhered to) which is now also open for vote. There are currently 2 risk of bias terms open for vote.

           

           

          Code and Voting Results From Last Week

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

          SEVCO:00274

          Passed

          study eligibility criteria not appropriate for review question

          A bias in study eligibility criteria due to a mismatch between the inclusion and exclusion criteria and the research question.

          The mismatch between the inclusion and exclusion criteria and the research question may relate to differences in the population, exposures, or outcomes studied from those of interest.

          SEVCO:00276

          Passed

          study eligibility criteria limits for study characteristics not appropriate

          A bias in study eligibility criteria that is specific to restrictions based on characteristics of the study design, conduct, or findings.

          Any restrictions applied on the basis of study characteristics should not introduce a bias in study eligibility criteria. Examples of such restrictions may include criteria based on study size, study design, study quality, or date when the study was conducted.

          In the ROBIS tool used for risk of bias assessment of systematic reviews, there is a question (1.4 Were all restrictions in eligibility criteria based on study characteristics appropriate?) that is different from the one which refers to whether the eligibility criteria are appropriate to the review question. Therefore, a separate term is available in SEVCO.

          SEVCO:00264

          Passed

          non-database search sources inadequate

          A bias in search strategy in which the sources other than electronic database sources are not sufficient to find the studies available.

          The set of sources other than databases (electronic sources) expected to include the studies of interest will vary with the review topic.

          SEVCO:00265

          Passed

          search strategy not sensitive

          A bias in search strategy in which the search terms and combinations of search terms are not sufficient to find the studies available.

          The search terms and combinations of search terms expected to include the studies of interest will vary with the review topic.

          SEVCO:00266

          Still Open

          search strategy limits for study report characteristics not appropriate

          A bias resulting from search strategy criteria implemented due to practical considerations for implementation, including properties of a report or its accessibility.

          A bias in search strategy that is specific to restrictions on the date, status, structure, language, or accessibility of the study report. Accessibility refers to where a resource is available and how one gains access to it.

          SEVCO:00360

          Passed

          incoherence among qualitative data, analysis, and interpretation

          A qualitative research bias in which there is any mismatch among hypothesis, data collected, data analysis, and results interpretation.

          The term mismatch applies to an inappropriate or wrong or inadequate relationship.

          SEVCO:00267

          Newly Open

          study eligibility criteria not adhered to

          A bias in search strategy due to incorrect implementation of the study inclusion and exclusion criteria.


           

          Knowledge Ecosystem Liaison/Coordination Updates:

          On March 5, the Ontology Working Group started the process of improving the Scientific Knowledge Accelerator Foundation's SEVCO website. We will continue this work next week. 

          On March 6, the Funding the Ecosystem Infrastructure Working Group identified an opportunity, in follow-up to ACTS, to connect the dots among target-focused initiatives that are making evidence and guidance more computable, especially care gap reports, evidence ecosystem efforts, and work done by AHRQ to put evidence into practice in measurable ways. Topics already identified, including chronic kidney disease, sickle cell anemia, pain management, and high blood pressure, would be ideal subjects. This is important work that could be funded. How can we pursue this idea in future meetings? AHRQ is a possible funding source. KDIGO or other guideline organizations might be interested in measuring the uptake of their guidelines.

          On March 8, the GRADE Ontology working group found that the 1 term open for vote, indirectness, received 14 votes: 9 affirmative and 5 against. After responding to comments and redrafting the definition and comment for application, the term is open for voting once again. The GRADE Ontology meeting was recorded and can be viewed here. If you voted previously for this term, please vote again, as the definition has changed.

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

          Indirectness

          The lack of similarities between the characteristics of the evidence and the characteristics of the question of interest.

          The question of interest may vary with context, such as the key considerations for a guideline or systematic review. Differences might include feasibility, populations, exposures, interventions, comparators, or outcomes. Indirectness is one of the domains that can impact the rating of the certainty of evidence. Related terms used by others for "directness" include "relevance", "external validity", and "applicability". Directness is the extent to which the results of an experimental or observational study apply to a target context outside of that study. Any differences may be considered indirectness. The degree of indirectness may range from trivial to important for the evidence implementation. The degree and importance of indirectness influences the rating of certainty of evidence, but not the definition of indirectness itself. In a network meta-analysis, an indirect effect estimate describes an A vs. C comparison that is derived from A vs. B studies and B vs. C studies. This use of the term "indirect" is not the intended use of Indirectness.

           

          FEvIR Platform and Tools Development Updates:

          On March 4, the CQL Development Working Group (HL7 CDS EBMonFHIR sub-WG) continued to create CQL expressions to describe inclusion and exclusion criteria from a real-world example. 
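          As a purely illustrative sketch (the library name and criteria below are invented by the editor, not the group's real-world example), eligibility criteria expressed in CQL commonly take this shape:

```cql
library ExampleEligibility version '0.1.0'

using FHIR version '4.0.1'
include FHIRHelpers version '4.0.1' called FHIRHelpers

context Patient

// Inclusion criterion: adults only (illustrative)
define "Age 18 Or Older":
  AgeInYears() >= 18

// Exclusion criterion: placeholder; a real criterion would test
// Condition, Observation, or MedicationRequest resources
define "Meets Exclusion Criteria":
  false

define "Is Eligible":
  "Age 18 Or Older" and not "Meets Exclusion Criteria"
```

          Each named define expression can then be evaluated per patient, which is what makes criteria written this way computable.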

          On March 7, the Computable EBM Tools Development Working Group briefly discussed progress on the GRADEpro-to-FEvIR Converter software and then discussed participation in upcoming meetings, including the EBMonFHIR Track at the HL7 Connectathon in Dallas, TX in May, and HL7 FHIR DevDays in Minneapolis, MN, June 10-13.

           

          HL7 Standards Development Updates:

          On March 5, the StatisticsOnFHIR WG (a CDS EBMonFHIR sub-WG) continued to work on a real-world example and developed recommendations for a pragmatic user interface that would utilize the concept of a shell table, offering this to the user after the protocol has been entered. 

          On March 7, the EBM Implementation Guide Working Group (a CDS EBMonFHIR sub-WG) downloaded additional software for IG development and discussed feedback from the IG ballot. We then created https://jira.hl7.org/browse/FHIR-44916 requesting GroupAssignment and ParticipantFlowMeasure examples for EvidenceVariable.

           

          Research Development Updates:

          On March 5, the Measuring the Rate of Scientific Knowledge Transfer Working Group reviewed improvements in the software created for the project and suggested that, on the administrator screen, the administrative user interface should be subordinate to the investigator user interface. We also decided to repeat the pilot with 3 articles as a trial run for the software updates.

           

          Releases on the FEvIR Platform:

          Computable Publishing®: Comparative Evidence Report Authoring Tool version 0.16.0 (March 5, 2024) creates and displays a Composition Resource with a ComparativeEvidenceReport Profile.

          Release 0.16.0 (March 5, 2024) now displays the resource type in the "Resource Reference" labels.

           

          Quotes for thought: 

          "I don't say she's lyin', Mr. Gilmer, I say she's mistaken in her mind" – Tom Robinson, To Kill a Mockingbird

           “The road to success is always under construction" – Lily Tomlin

          "I am extraordinarily patient, provided I get my own way in the end." — Margaret Thatcher

           “The question isn’t who is going to let me; it’s who is going to stop me.” —Ayn Rand 

           "A Dream is a wish the heart makes" – from the Walt Disney film, "Cinderella"

           

          To get involved or stay informed: HEvKA Project Page on FEvIR Platform, HEvKA Project Page on HL7 Confluence, or join any of the groups that are now meeting in the following weekly schedule:

          Joanne Dehnbostel

          Mar 12, 2024, 1:24:48 AM
          to Health Evidence Knowledge Accelerator (HEvKA)

           

           

          7 people (BA, CE, HL, JD, KS, KW, MA) participated today in up to 4 active working group meetings.

          The Project Management Working Group made changes, based on reviewer comments, to a manuscript already submitted to Clinical and Public Health Guidelines (the new journal from the Guidelines International Network). We then looked over a discussion in the HL7 chat (https://chat.fhir.org/#narrow/stream/179166-implementers/topic/Group.20characteristic.20valueset.20comparison) regarding creating a standard method for using value sets to define the values for characteristics in a Group Resource. The conversation then turned to a call from NIH seeking ideas on the use of common data elements (https://nexus.od.nih.gov/all/2024/02/28/seeking-ideas-on-using-common-data-elements-for-nih-supported-clinical-research/). We decided to create some examples for demographics (e.g., marital status).
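          As a sketch of the kind of example discussed (the use of the v3-MaritalStatus code system and the plain-text characteristic code here are illustrative assumptions by the editor, not a settled convention), a Group characteristic for marital status might look like:

```json
{
  "resourceType": "Group",
  "type": "person",
  "membership": "definitional",
  "characteristic": [
    {
      "code": { "text": "marital status" },
      "valueCodeableConcept": {
        "coding": [
          {
            "system": "http://terminology.hl7.org/CodeSystem/v3-MaritalStatus",
            "code": "M",
            "display": "Married"
          }
        ]
      },
      "exclude": false
    }
  ]
}
```

          A standard method would pin down which code and which value set to use for each common data element, so that the same characteristic is expressed identically across Group Resources.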

          The next item of business was the following tracker item, https://jira.hl7.org/browse/FHIR-31867, to create an EvidenceReport profile of Composition and deprecate the EvidenceReport resource, which represents a major advance in EBMonFHIR simplification.

          The Setting the Scientific Record on FHIR Working Group worked on development of the GRADEpro-to-FEvIR Converter software. Now all resources created by the GRADEpro-to-FEvIR Converter (other than the Conversion Report Composition) should have a RelatedArtifact entry (element or extension) with type = part-of and a resourceReference to the Conversion Report Composition.
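          In FHIR R5 terms, such an entry might look like the following sketch (the resource ids and the choice of an Evidence resource are invented for illustration):

```json
{
  "resourceType": "Evidence",
  "id": "converted-resource-example",
  "status": "active",
  "relatedArtifact": [
    {
      "type": "part-of",
      "resourceReference": {
        "reference": "Composition/conversion-report-example",
        "display": "Conversion Report Composition"
      }
    }
  ]
}
```

          Linking every converted resource back to the Conversion Report Composition this way lets consumers find the full set of outputs from a single conversion.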

          The CQL Development Working Group (a CDS EBMonFHIR sub-WG) also discussed the call from NIH seeking ideas on the use of common data elements (https://nexus.od.nih.gov/all/2024/02/28/seeking-ideas-on-using-common-data-elements-for-nih-supported-clinical-research/). We decided to create some examples for demographics (e.g., marital status).

           

          Today the Statistic Terminology Working Group found 3 terms approved (calibration slope, measure of heterogeneity, area under the curve). One term (area under the ROC curve) is still open with limited votes. The group also drafted a re-ordering of the list of statistic terms.

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

          calibration slope

          A measure of calibration that is the rate of change in the appropriately transformed value per unit change of the correspondingly transformed predicted value.

          • calibration-in-the-small

          For calibration of binary outcome variables (0 or 1), the calibration slope is computed from a statistical model where the log odds of the predicted probabilities is a linear function of the empirical frequencies (logistic regression). The transformation is log odds (logit, the link function for a generalized linear model for the expected value of the outcome).

          For calibration of count outcome variables (0, 1, 2, 3, ...), the calibration slope may be computed from a statistical model where the log of the predicted mean counts is a linear function of the empirical frequencies determined by unique combinations of the covariates. The transformation is log, the link function, of the counts.

          There are other types of outcome variables for which the calibration slope may be obtained.

          Slopes further away from 1.0 indicate, at upper and lower values, over- or under-confidence in the prediction.

          measure of heterogeneity

          A statistic that represents the variation or spread among values in the set of estimates across studies.

          • measure of statistical heterogeneity

There are several types of heterogeneity (or diversity), which are important factors in determining whether evidence should be pooled. Clinical heterogeneity may refer to variations in the population, intervention, comparator, or outcome. Methodological heterogeneity may refer to variations in study design. Statistical heterogeneity is described here.

          A measure of dispersion is defined as a statistic that represents the variation or spread among data values in a dataset or data distribution. In the context of a meta-analysis, a measure of heterogeneity is a measure of dispersion in which the dataset is the set of estimates across studies.

area under the curve

A statistic that summarizes the variation of a quantity of interest across a domain interval of interest.

          • AUC

As examples: in classification tasks, the quantity of interest is the true positive rate and the domain interval of interest is the false positive rate (see ROC curve); in pharmacodynamic studies, the quantity of interest is the concentration of a drug and the domain interval of interest is time; in intensive care, when assessing lung barotrauma, the quantity of interest is pressure and the domain interval of interest is time. The average quantity is the area under the curve divided by the range of the domain interval of interest.

          Still Open

area under the ROC curve

An area under the curve where the curve is the true positive rate and the range of interest is the false positive rate.

          • AUC
          • AUROC
          • area under the receiver operating characteristic curve

The area under the ROC curve is used to assess the performance of a classifier used to distinguish between two or more groups. ROC stands for Receiver Operating Characteristic. Another term for true positive rate is sensitivity, and the false positive rate equals 1 − specificity.
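The area-under-the-ROC-curve definition above has a convenient pairwise formulation: the AUROC equals the probability that a randomly chosen positive case receives a higher classifier score than a randomly chosen negative case, with ties counted as half (the Mann-Whitney U relationship). A minimal sketch with made-up scores:

```python
import numpy as np

# Illustrative classifier scores (hypothetical data, not from the working group).
pos = np.array([0.9, 0.8, 0.4])   # scores for actually-positive cases
neg = np.array([0.7, 0.3, 0.2])   # scores for actually-negative cases

# AUROC = P(positive outscores negative), ties counted as 1/2.
diff = pos[:, None] - neg[None, :]          # all positive-vs-negative pairs
auroc = ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / diff.size
# Here 8 of the 9 pairs are correctly ordered, so auroc = 8/9.
```

This pairwise view makes clear why AUROC is insensitive to any monotone rescaling of the scores.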

          To participate you can join the Scientific Evidence Code System (SEVCO) Expert Working Group at https://fevir.net/resources/Project/27845.

           

          Releases on the FEvIR Platform:

          Release 0.6.2 (March 11, 2024) the builders no longer crash if there's no section element in the JSON.

          • Computable Publishing®: Recommendation Authoring Tool version 0.12.2 (March 11, 2024) creates a Composition Resource with a Recommendation Profile and the associated Resources for a structured representation of a recommendation.
            • Release 0.12.2 (March 11, 2024) the builders no longer crash if there's no section element in the JSON.

           

          Quote for thought "Home is the nicest place there is." --Laura Ingalls Wilder

          Joanne Dehnbostel

          unread,
          Mar 13, 2024, 2:08:16 AM3/13/24
          to Health Evidence Knowledge Accelerator (HEvKA)

           

           

           

7 people (BA, CE, HL, JD, KS, KW, RC) participated today in up to 3 active working group meetings.

          The Measuring the Rate of Scientific Knowledge Transfer Working Group reviewed the first results from the pilot study and suggested several changes to the user interface of the RADAR tool.

          The StatisticsOnFHIR Working Group (a CDS EBMonFHIR sub-WG) discussed strategies to overcome the manual data entry burden when converting scientific data into FHIR format. 

          The Ontology Management Working Group talked about creating a standard method for adding language translations to the GRADE ontology.

          The group then discussed creating a Scientific Evidence Code System (SEVCO) version 2 which would include a subset of risk of bias terms.

          This version of SEVCO could then be included in the paper that was previously going to cover only the study design part of the code system.

          The group also discussed revitalizing the connection between SEVCO and the STATO code system.  

          Quote for thought: "Make every effort to change things you do not like. If you cannot make a change, change the way you have been thinking. You might find a new solution." --Maya Angelou

          Joanne Dehnbostel

          unread,
          Mar 14, 2024, 7:59:11 AM3/14/24
          to Health Evidence Knowledge Accelerator (HEvKA)

           

6 people (BA, CE, JD, JO, KS, MA) participated today in up to 2 active working group meetings.

          The Funding the Ecosystem Infrastructure Working Group discussed the current state of multiple collaborations and funding pathways which could provide resources for "Making Science Computable". 

          The Communications Working Group created a draft proposal for an invited special panel session at the Global Evidence Summit 2024 to be presented in Prague in September and discussed a proposal for the upcoming AMIA Conference which has a deadline of March 18.  We will continue to discuss both proposals in the remaining HEvKA meetings this week. 

          Releases on the FEvIR Platform: 

          The FEvIR®: RAte of Dissemination Assessment Research (RADAR) Tool Version 0.10.0 (March 13, 2024) facilitates research conduct and project management for an investigation to measure the rate of scientific knowledge transfer.

Release 0.10.0 (March 13, 2024) CTRL + S now saves your work in many of the modals. The close-out message that appears when the user tries to close without saving has been reworded, and a "Save Changes" button has been added to it. The green checkmark button has been removed from the modals because its function wasn't intuitive.

          Quote for thought: “March bustles in on windy feet and sweeps my doorstep and my street.”― Susan Reiner

          Joanne Dehnbostel

          unread,
          Mar 15, 2024, 2:34:04 PM3/15/24
          to Health Evidence Knowledge Accelerator (HEvKA)

           

           

           

9 people (AN, BA, CE, CW, IK, JD, KR, KS, MH) participated today in up to 2 active working group meetings.

          The EBM Implementation Guide Working Group (a CDS EBMonFHIR sub-WG) hosted the monthly GINTech meeting. The group edited a proposal for an invited special session for the Global Evidence Summit 2024 which will take place in September. The group then reviewed progress on the GRADEpro to FHIR software tool on the FEvIR Platform. The results of one conversion from GRADEpro to FHIR can be seen at https://fevir.net/resources/composition/207585. The meeting was recorded and can be viewed here.

          The Computable EBM Tools Development Working Group reviewed and submitted the proposal for the Global Evidence Summit 2024 described above and discussed plans to submit a proposal for the AMIA conference which will occur in San Francisco, CA later this year. The group then discussed how to display absolute and relative statistics in a Summary of Findings Table. 

          Releases on the FEvIR Platform:

          GRADEpro-to-FEvIR Converter Tool Version 0.1.0 (March 14, 2024) converts data from GRADEpro into FEvIR Resources in FHIR® JSON.

          Release 0.1.0 (March 14, 2024)  introduces a new Converter Tool that allows the entry of an identifier specific to a GRADEpro profile, and returns output in the form of a FHIR Composition Resource (GRADEpro Conversion Report) with associated FHIR ArtifactAssessment, Composition, Evidence, EvidenceVariable and Group Resources.

          The FEvIR®: RAte of Dissemination Assessment Research (RADAR) Tool Version 0.11.0 (March 14, 2024) facilitates research conduct and project management for an investigation to measure the rate of scientific knowledge transfer.

Release 0.11.0 (March 14, 2024) for administrators of an investigation, the Load/Change Articles buttons that only they can access are now separated into their own section of the page.

          Quote for Thought: "We are here to add what we can to life, not to get what we can from life."–William Osler

          Joanne Dehnbostel

          unread,
          Mar 16, 2024, 10:54:15 PM3/16/24
          to Health Evidence Knowledge Accelerator (HEvKA)

          11 people (BA, CA-D, HK, JB, JD, KS, KW, MA, SM, SS, TD) participated today in up to 3 active working group meetings.

On March 15, the Risk of Bias Terminology Working Group found that of the 2 terms open for vote last week, 1 term (search strategy limits for study report characteristics not appropriate) received enough votes to be added to the code system. The remaining term (study eligibility criteria not adhered to) and a sibling term (error in study selection not minimized) were moved in the hierarchy to reflect their role in selection bias, consistent with questions that appear in the ROBIS tool, which was the original inspiration for the terms. The group will define these terms next week, so no terms are open for vote this week.

          Code and Voting Results From Last Week

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

          SEVCO:00266

          Passed

          search strategy limits for study report characteristics not appropriate

          A bias resulting from search strategy criteria implemented due to practical considerations for implementation, including properties of a report or its accessibility.

A bias in search strategy that is specific to restrictions on the date, status, structure, language, or accessibility of the study report. Accessibility refers to where a resource is available and how one gains access to it.

          SEVCO:00267

          Not Open, pending revision

          study eligibility criteria not adhered to

          A bias in search strategy due to incorrect implementation of the study inclusion and exclusion criteria.


On March 15, the GRADE Ontology Working Group found that the 1 term open for vote, indirectness, received 9 votes: 6 affirmative and 3 against. After responding to comments and redrafting the definition and comment for application, the term is open for voting once again. The GRADE Ontology meeting was recorded and can be viewed here. If you voted previously for this term, please vote again, as the definition has changed.

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

          Indirectness

          Differences between the characteristics of the evidence and the intended target application.

          Differences might include feasibility, populations, exposures, interventions, comparators, or outcomes. The intended target application, also called the question of interest, may vary with context, such as the key considerations for a guideline or systematic review. Indirectness is one of the domains that can impact the rating of the certainty of evidence. The degree and importance of indirectness influences the rating of certainty of evidence, but not the definition of indirectness itself. Related terms used by others for "directness" include "relevance", "external validity", and "applicability". Directness is the extent to which the results of an experimental or observational study apply to a target context outside of that study. In a network meta-analysis, an indirect effect estimate describes an A vs. C comparison that is derived from A vs. B studies and B vs. C studies. This use of the term "indirect" is not the intended use of Indirectness.
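The network meta-analysis aside above can be made concrete with the standard Bucher indirect-comparison calculation: on an additive scale such as log odds ratios, the indirect A vs. C effect is the sum of the A vs. B and B vs. C effects, and their variances add. The numbers below are illustrative assumptions, not from the source.

```python
import math

# Hypothetical direct estimates (log odds ratios) and their standard errors.
d_ab, se_ab = math.log(0.8), 0.10   # A vs. B
d_bc, se_bc = math.log(0.9), 0.15   # B vs. C

# Bucher indirect comparison: effects add on the log scale, variances sum.
d_ac = d_ab + d_bc
se_ac = math.sqrt(se_ab**2 + se_bc**2)
or_ac = math.exp(d_ac)   # indirect A vs. C odds ratio = 0.8 * 0.9 = 0.72
```

As the comment notes, this "indirect" estimate is a derived comparison across studies, which is distinct from the GRADE notion of indirectness defined above.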

           

          The Project Management Working Group prepared the agenda for next week:

          Day/Time (Eastern)

          Working Group

          Agenda Items

          Monday 8-9 am

          Project Management

          FHIR changes and EBMonFHIR Implementation Guide issues

          Monday 9-10 am

          Setting the Scientific Record on FHIR

          Create Examples of Common Data Elements

          Monday 10-11 am

          CQL Development (a CDS EBMonFHIR sub-WG)

          Create Examples of Common Data Elements

          Monday 2-3 pm

          Statistic Terminology

          Review SEVCO terms (area under the curve terms)

          Tuesday 9 am-10 am

          Measuring the Rate of Scientific Knowledge Transfer

          Review software improvements, discuss results of pilot, methods discussion

          Tuesday 2-3 pm

          StatisticsOnFHIR (a CDS EBMonFHIR sub-WG)

          Create Examples of Common Data Elements

          Tuesday 3-4 pm

          Ontology Management

          Discuss protocol for translation of GRADE Ontology Terms

          Wednesday 8-9 am

          Funding the Ecosystem Infrastructure

          Create Presentation for Dissolve-E kickoff

          Wednesday 9-10 am

          Communications (Awareness, Scholarly Publications)

Publications: Study Design Terminology Paper (adjustments for resubmission); Presentations

          Thursday 8-9 am

          EBM Implementation Guide (a CDS EBMonFHIR sub-WG)

          Create Baseline Measure Report Profile of Composition Resource

          Thursday 9-10 am

          Computable EBM Tools Development

          Review GRADEpro-to-FEvIR Converter progress

          Friday 9-10 am

          Risk of Bias Terminology

          Review SEVCO terms

          Friday 10-11 am

          GRADE Ontology

          Term development (Indirectness - 1 term open for vote)

          Friday 12-1 pm

          Project Management

          Prepare Weekly Agenda

          Releases on the FEvIR Platform:

          The Fast Evidence Interoperability Resources (FEvIR) Platform is available for use now, but is “pre-release”.  The current version is 0.205.2 (March 14, 2024). Viewing resources is open without login.  Signing in is free and required to create content (which can then only be edited by the person who created the content).

Release 0.205.2 (March 14, 2024) the resource navigation buttons on the left, such as the Update button, can no longer be clicked while changes are being submitted.

          The GRADEpro-to-FEvIR Converter Version 0.2.0 (March 15, 2024) converts structured data from GRADEpro to FHIR JSON.

          Release 0.2.0 (March 15, 2024) converts GRADEpro reference tags to Citation classification classifiers.

Quote for Thought: "I'm at a place in my life when errands are starting to count as going out." --Anonymous

          Joanne Dehnbostel

          unread,
          Mar 18, 2024, 12:22:27 PM3/18/24
          to Health Evidence Knowledge Accelerator (HEvKA) Weekly Update, Health Evidence Knowledge Accelerator (HEvKA)

           

           

          Project Coordination Updates:

20 people (AN, BA, CA-D, CE, CW, HK, HL, IK, JB, JD, JO, KR, KS, KW, MA, MH, RC, SM, SS, TD) from 9 countries (Belgium, Canada, Chile/Spain, Finland, Norway, Peru, Poland, UK, USA) participated this week in up to 14 active working group meetings.

On March 11, the Project Management Working Group made changes, based on reviewer comments, to a manuscript already submitted to Clinical and Public Health Guidelines (the new journal by Guidelines International Network). We then looked over a discussion in the HL7 chat, https://chat.fhir.org/#narrow/stream/179166-implementers/topic/Group.20characteristic.20valueset.20comparison, regarding creating a standard method for using value sets to define the values for characteristics in a Group Resource. The conversation then turned to a call from NIH seeking ideas on the use of common data elements: https://nexus.od.nih.gov/all/2024/02/28/seeking-ideas-on-using-common-data-elements-for-nih-supported-clinical-research/. We decided to create some examples for demographics (e.g., marital status).

The next item of business was the following tracker item, https://jira.hl7.org/browse/FHIR-31867, to create an EvidenceReport profile of Composition and deprecate the EvidenceReport resource, which represents a major advance in EBMonFHIR simplification.

On March 11, the Setting the Scientific Record on FHIR Working Group worked on development of the GRADEpro-to-FEvIR software. Now all resources created by the GRADEpro-to-FEvIR Converter (other than the Conversion Report Composition) should have a RelatedArtifact entry (element or extension) with type = part-of and a resourceReference to the Conversion Report Composition.

          On March 15, the Project Management Working Group prepared the suggested agenda for the week of March 18-22

          Day/Time (Eastern)

          Working Group

          Agenda Items

          Monday 8-9 am

          Project Management

          FHIR changes and EBMonFHIR Implementation Guide issues

          Monday 9-10 am

          Setting the Scientific Record on FHIR

          Create Examples of Common Data Elements

          Monday 10-11 am

          CQL Development (a CDS EBMonFHIR sub-WG)

          Create Examples of Common Data Elements

          Monday 2-3 pm

          Statistic Terminology

          Review SEVCO terms (area under the curve terms)

          Tuesday 9 am-10 am

          Measuring the Rate of Scientific Knowledge Transfer

          Review software improvements, discuss results of pilot, methods discussion

          Tuesday 2-3 pm

          StatisticsOnFHIR (a CDS EBMonFHIR sub-WG)

          Create Examples of Common Data Elements

          Tuesday 3-4 pm

          Ontology Management

          Discuss protocol for translation of GRADE Ontology Terms

          Wednesday 8-9 am

          Funding the Ecosystem Infrastructure

          Create Presentation for Dissolve-E kickoff

          Wednesday 9-10 am

          Communications (Awareness, Scholarly Publications)

Publications: Study Design Terminology Paper (adjustments for resubmission); Presentations

          Thursday 8-9 am

          EBM Implementation Guide (a CDS EBMonFHIR sub-WG)

          Create Baseline Measure Report Profile of Composition Resource

          Thursday 9-10 am

          Computable EBM Tools Development

          Review GRADEpro-to-FEvIR Converter progress

          Friday 9-10 am

          Risk of Bias Terminology

          Review SEVCO terms

          Friday 10-11 am

          GRADE Ontology

          Term development (Indirectness - 1 term open for vote)

          Friday 12-1 pm

          Project Management

          Prepare Weekly Agenda

           

          On March 13, the Communications Working Group created a draft proposal for an invited special panel session at the Global Evidence Summit 2024 to be presented in Prague in September and discussed a proposal for the upcoming AMIA Conference which has a deadline of March 18.  We will continue to discuss both proposals in the remaining HEvKA meetings this week. 

          Announcement: The article "Making Science Computable" was accepted for publication in the Journal Clinical and Public Health Guidelines which is published by Guidelines International Network.

           

          SEVCO Updates:

           

On March 11, the Statistic Terminology Working Group found 3 terms approved (calibration slope, measure of heterogeneity, area under the curve) and 1 term still open with limited votes (area under the ROC curve). The group also drafted a re-ordering of the list of statistic terms.

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

calibration slope

A measure of calibration that is the rate of change in the appropriately transformed value per unit change of the correspondingly transformed predicted value.

          • calibration-in-the-small

For calibration of binary outcome variables (0 or 1), the calibration slope is computed from a statistical model where the log odds of the predicted probabilities is a linear function of the empirical frequencies (logistic regression). The transformation is log odds (logit, the link function for a generalized linear model for the expected value of the outcome).

          For calibration of count outcome variables (0, 1, 2, 3, ...), the calibration slope may be computed from a statistical model where the log of the predicted mean counts is a linear function of the empirical frequencies determined by unique combinations of the covariates. The transformation is log, the link function, of the counts.

          There are other types of outcome variables for which the calibration slope may be obtained.

          Slopes further away from 1.0 indicate, at upper and lower values, over- or under-confidence in the prediction.

measure of heterogeneity

A statistic that represents the variation or spread among values in the set of estimates across studies.

          • measure of statistical heterogeneity

There are several types of heterogeneity (or diversity), which are important factors in determining whether evidence should be pooled. Clinical heterogeneity may refer to variations in the population, intervention, comparator, or outcome. Methodological heterogeneity may refer to variations in study design. Statistical heterogeneity is described here.

          A measure of dispersion is defined as a statistic that represents the variation or spread among data values in a dataset or data distribution. In the context of a meta-analysis, a measure of heterogeneity is a measure of dispersion in which the dataset is the set of estimates across studies.

area under the curve

A statistic that summarizes the variation of a quantity of interest across a domain interval of interest.

          • AUC

As examples: in classification tasks, the quantity of interest is the true positive rate and the domain interval of interest is the false positive rate (see ROC curve); in pharmacodynamic studies, the quantity of interest is the concentration of a drug and the domain interval of interest is time; in intensive care, when assessing lung barotrauma, the quantity of interest is pressure and the domain interval of interest is time. The average quantity is the area under the curve divided by the range of the domain interval of interest.

          Still Open

area under the ROC curve

An area under the curve where the curve is the true positive rate and the range of interest is the false positive rate.

          • AUC
          • AUROC
          • area under the receiver operating characteristic curve

The area under the ROC curve is used to assess the performance of a classifier used to distinguish between two or more groups. ROC stands for Receiver Operating Characteristic. Another term for true positive rate is sensitivity, and the false positive rate equals 1 − specificity.
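The calibration-slope definition above, for binary outcomes, can be sketched numerically: regress the observed outcomes on the logit of the predicted probabilities and read off the slope, which should be close to 1 for a well-calibrated model. The synthetic data and the hand-rolled Newton-Raphson logistic fit below are illustrative assumptions, not part of the working group's materials.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000

# Hypothetical predicted probabilities; here they equal the true event
# probabilities, so the expected calibration slope is 1.
p_pred = rng.uniform(0.05, 0.95, n)
y = rng.binomial(1, p_pred)           # observed binary outcomes

# Design matrix: intercept plus logit of the predicted probabilities.
logit = np.log(p_pred / (1 - p_pred))
X = np.column_stack([np.ones(n), logit])

# Newton-Raphson fit of logistic regression y ~ 1 + logit(p_pred).
beta = np.zeros(2)
for _ in range(25):
    mu = 1 / (1 + np.exp(-X @ beta))              # fitted probabilities
    W = mu * (1 - mu)                             # IRLS weights
    beta += np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (y - mu))

intercept, slope = beta                           # slope should be near 1.0
```

Slopes well below 1 would indicate overconfident predictions (too extreme), and slopes well above 1 underconfident ones, matching the comment for application above.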

          To participate you can join the Scientific Evidence Code System (SEVCO) Expert Working Group at https://fevir.net/resources/Project/27845.

On March 15, the Risk of Bias Terminology Working Group found that of the 2 terms open for vote last week, 1 term (search strategy limits for study report characteristics not appropriate) received enough votes to be added to the code system. The remaining term (study eligibility criteria not adhered to) and a sibling term (error in study selection not minimized) were moved in the hierarchy to reflect their role in selection bias, consistent with questions that appear in the ROBIS tool, which was the original inspiration for the terms. The group will define these terms next week, so no terms are open for vote this week.

          Code and Voting Results From Last Week

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

          SEVCO:00266

          Passed

          search strategy limits for study report characteristics not appropriate

          A bias resulting from search strategy criteria implemented due to practical considerations for implementation, including properties of a report or its accessibility.

A bias in search strategy that is specific to restrictions on the date, status, structure, language, or accessibility of the study report. Accessibility refers to where a resource is available and how one gains access to it.

          SEVCO:00267

          Not Open, pending revision

          study eligibility criteria not adhered to

          A bias in search strategy due to incorrect implementation of the study inclusion and exclusion criteria.


          To participate you can join the Scientific Evidence Code System (SEVCO) Expert Working Group at https://fevir.net/resources/Project/27845.

           

           

          Knowledge Ecosystem Liaison/Coordination Updates:

          On March 12, the Ontology Management Working Group talked about creating a standard method for adding language translations to the GRADE ontology.

          The group then discussed creating a Scientific Evidence Code System (SEVCO) version 2 which would include a subset of risk of bias terms.

          This version of SEVCO could then be included in the paper that was previously going to cover only the study design part of the code system.

          The group also discussed revitalizing the connection between SEVCO and the STATO code system.  

In the next Ontology Management Working Group meeting, we will discuss a protocol for language translation of GRADE Ontology terms.

          On March 13, the Funding the Ecosystem Infrastructure Working Group discussed the current state of multiple collaborations and funding pathways which could provide resources for "Making Science Computable". 

On March 15, the GRADE Ontology Working Group found that the 1 term open for vote, indirectness, received 9 votes: 6 affirmative and 3 against. After responding to comments and redrafting the definition and comment for application, the term is open for voting once again. The GRADE Ontology meeting was recorded and can be viewed here. If you voted previously for this term, please vote again, as the definition has changed.

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

          Indirectness

          Differences between the characteristics of the evidence and the intended target application.

          Differences might include feasibility, populations, exposures, interventions, comparators, or outcomes. The intended target application, also called the question of interest, may vary with context, such as the key considerations for a guideline or systematic review. Indirectness is one of the domains that can impact the rating of the certainty of evidence. The degree and importance of indirectness influences the rating of certainty of evidence, but not the definition of indirectness itself. Related terms used by others for "directness" include "relevance", "external validity", and "applicability". Directness is the extent to which the results of an experimental or observational study apply to a target context outside of that study. In a network meta-analysis, an indirect effect estimate describes an A vs. C comparison that is derived from A vs. B studies and B vs. C studies. This use of the term "indirect" is not the intended use of Indirectness.

           

           

          FEvIR Platform and Tools Development Updates:

On March 11, the CQL Development Working Group (a CDS EBMonFHIR sub-WG) discussed the call from NIH seeking ideas on the use of common data elements: https://nexus.od.nih.gov/all/2024/02/28/seeking-ideas-on-using-common-data-elements-for-nih-supported-clinical-research/. We decided to create some examples for demographics (e.g., marital status).

          On March 14, the Computable EBM Tools Development Working Group reviewed and submitted the proposal for the Global Evidence Summit 2024 described above and discussed plans to submit a proposal for the AMIA conference which will occur in San Francisco, CA later this year. The group then discussed how to display absolute and relative statistics in a Summary of Findings Table. 

           

          HL7 Standards Development Updates:

          On March 12, the StatisticsOnFHIR Working Group (a CDS EBMonFHIR sub-WG) discussed strategies to overcome the manual data entry burden when converting scientific data into FHIR format. 

          On March 14, the EBM Implementation Guide Working Group (a CDS EBMonFHIR sub-WG) hosted the monthly GINTech meeting. The group edited a proposal for an invited special session for the Global Evidence Summit 2024 which will take place in September. The group then reviewed progress on the GRADEpro to FHIR software tool on the FEvIR Platform. The results of one conversion from GRADEpro to FHIR can be seen at https://fevir.net/resources/composition/207585. The meeting was recorded and can be viewed here.

           

          Research Development Updates:

          On March 12, the Measuring the Rate of Scientific Knowledge Transfer Working Group reviewed the first results from the pilot study and suggested several changes to the user interface of the RADAR tool.

           

          Releases on the FEvIR Platform:

          The Fast Evidence Interoperability Resources (FEvIR) Platform is available for use now, but is “pre-release”.  The current version is 0.205.2 (March 15, 2024). Viewing resources is open without login.  Signing in is free and required to create content (which can then only be edited by the person who created the content).

Release 0.205.2 (March 15, 2024) the resource navigation buttons on the left, such as the Update button, can no longer be clicked while changes are being submitted.

GRADEpro-to-FEvIR Converter Tool Version 0.2.0 (March 15, 2024) converts structured data from GRADEpro to FHIR JSON.

          Release 0.1.0 (March 14, 2024)  introduces a new Converter Tool that allows the entry of an identifier specific to a GRADEpro profile, and returns output in the form of a FHIR Composition Resource (GRADEpro Conversion Report) with associated FHIR ArtifactAssessment, Composition, Evidence, EvidenceVariable and Group Resources.

Release 0.2.0 (March 15, 2024) converts GRADEpro reference tags to Citation classification classifiers.

          The FEvIR®: RAte of Dissemination Assessment Research (RADAR) Tool Version 0.11.0 (March 14, 2024) facilitates research conduct and project management for an investigation to measure the rate of scientific knowledge transfer.

Release 0.10.0 (March 13, 2024) CTRL + S now saves your work in many of the modals. The close-out message that appears when the user tries to close without saving has been reworded, and a "Save Changes" button has been added to it. The green checkmark button has been removed from the modals because its function wasn't intuitive.

Release 0.11.0 (March 14, 2024) for administrators of an investigation, the Load/Change Articles buttons that only they can access are now separated into their own section of the page.

          Computable Publishing®: M11 Report Authoring Tool version 0.4.1 (March 11, 2024) creates and displays a Composition Resource with an M11Report Profile.

          Release 0.4.1 (March 11, 2024) the builders no longer crash if there's no section element in the JSON.

          Computable Publishing®: Comparative Evidence Report Authoring Tool version 0.16.1.0 (March 11, 2024) creates and displays a Composition Resource with a ComparativeEvidenceReport Profile.

          Release 0.16.1 (March 11, 2024) the builders no longer crash if there's no section element in the JSON.

          Computable Publishing®: Guideline Authoring Tool version 0.6.2 (March 11, 2024) creates a Composition Resource with a Guideline Profile

          Release 0.6.2 (March 11, 2024) the builders no longer crash if there's no section element in the JSON.

          Computable Publishing®: Recommendation Authoring Tool version 0.12.2 (March 11, 2024) creates a Composition Resource with a Recommendation Profile and the associated Resources for a structured representation of a recommendation.

          Release 0.12.2 (March 11, 2024) the builders no longer crash if there's no section element in the JSON.

           

          Quotes for thought: 

           

          "Home is the nicest place there is." --Laura Ingalls Wilder

          "Make every effort to change things you do not like. If you cannot make a change, change the way you have been thinking. You might find a new solution." --Maya Angelou

          “March bustles in on windy feet and sweeps my doorstep and my street.”― Susan Reiner

          "We are here to add what we can to life, not to get what we can from life."–William Osler

          "I'm at a place in my life when errands are starting to count as going out" --Anonymous

           

          To get involved or stay informed: HEvKA Project Page on FEvIR Platform, HEvKA Project Page on HL7 Confluence, or join any of the groups that are now meeting in the following weekly schedule:

          Joanne Dehnbostel

          unread,
          Mar 19, 2024, 7:12:11 AM3/19/24
          to Health Evidence Knowledge Accelerator (HEvKA)

           

           

           

          7 people (BA, CE , JD, KS, KW, MA, PR-S) participated today in up to 4 active working group meetings.

All three working groups this morning, including the Project Management Working Group, the Setting The Scientific Record on FHIR Working Group, and the CQL Development Working Group (a CDS EBMonFHIR sub-WG), started to respond to a Request For Information (RFI) from NIH https://nexus.od.nih.gov/all/2024/02/28/seeking-ideas-on-using-common-data-elements-for-nih-supported-clinical-research/. This response will include examples of Common Data Elements. Three examples were created as FHIR EvidenceVariable Resources today:

          CDE Collective: Marital Status

          CDE NINDS: Marital status code

          CDE ScHARe: Marital Status
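The three CDE examples above were built as FHIR EvidenceVariable Resources. As a rough illustration of the general shape (the field choices below are our own minimal assumptions, not the actual FEvIR-generated resources), a CDE such as Marital Status could be sketched in FHIR JSON like this:

```python
import json

# Hypothetical sketch only (not the actual FEvIR output): a Common Data
# Element such as "Marital Status" represented as a minimal FHIR
# EvidenceVariable resource in JSON.
marital_status_cde = {
    "resourceType": "EvidenceVariable",
    "title": "Marital Status",
    "status": "active",
    "description": "Common Data Element capturing a participant's marital status.",
}

print(json.dumps(marital_status_cde, indent=2))
```

A real CDE representation would carry additional elements (identifiers, characteristics, value-set bindings); this sketch only shows the resource skeleton.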

          The CQL Development Working Group (a CDS EBMonFHIR sub-WG) also continued the effort to convert example inclusion and exclusion criteria to CQL.

The Statistic Terminology Working Group found that the term that was open for vote last week (area under the ROC curve) received enough votes to be accepted into the code system. Two additional terms were defined in the meeting today (c-statistic, partial area under the ROC curve). There are currently 2 terms open for voting. 

           Voting Results

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

          Newly Open For Vote

          c-statistic

          Area under the ROC curve calculated with the full range of possible values for true positive rate and false positive rate.

Area under the ROC curve is defined as an area under the curve where the curve is the true positive rate and the range of interest is the false positive rate. ROC stands for Receiver Operating Characteristic. The area under the ROC curve is used to assess the performance of a classifier used to distinguish between two or more groups. Another term for true positive rate is sensitivity and another term for false positive rate is 1-specificity.

          Newly Open For Vote

          partial area under the ROC curve

Area under the ROC curve calculated with a specified portion of the full range of possible values for true positive rate and false positive rate.

Area under the ROC curve is defined as an area under the curve where the curve is the true positive rate and the range of interest is the false positive rate. ROC stands for Receiver Operating Characteristic. The area under the ROC curve is used to assess the performance of a classifier used to distinguish between two or more groups. Another term for true positive rate is sensitivity and another term for false positive rate is 1-specificity.

Passed

area under the ROC curve

An area under the curve where the curve is the true positive rate and the range of interest is the false positive rate.

          • AUC
          • AUROC
          • area under the receiver operating characteristic curve

The area under the ROC curve is used to assess the performance of a classifier used to distinguish between two or more groups. ROC stands for Receiver Operating Characteristic. Another term for true positive rate is sensitivity and another term for false positive rate is 1-specificity.

          The Group also discussed the collaboration with the Statistics Ontology , STATO.
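The AUC definitions above can be made concrete with a small sketch: given (false positive rate, true positive rate) points on an ROC curve, the full area (the c-statistic) and a partial area restricted to an FPR interval can both be computed by trapezoidal integration. The function names and example curve below are our own illustration, not part of the SEVCO definitions:

```python
def auc(points):
    """Trapezoidal area under a curve given (fpr, tpr) points."""
    pts = sorted(points)
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

def partial_auc(points, fpr_lo, fpr_hi):
    """Area under the ROC curve restricted to FPR in [fpr_lo, fpr_hi],
    linearly interpolating the curve at the interval endpoints."""
    pts = sorted(points)

    def tpr_at(x):
        # Linear interpolation of TPR at a given FPR value.
        for (x1, y1), (x2, y2) in zip(pts, pts[1:]):
            if x1 <= x <= x2:
                if x2 == x1:
                    return y2
                return y1 + (y2 - y1) * (x - x1) / (x2 - x1)
        return pts[-1][1]

    inner = [(x, y) for (x, y) in pts if fpr_lo < x < fpr_hi]
    clipped = [(fpr_lo, tpr_at(fpr_lo))] + inner + [(fpr_hi, tpr_at(fpr_hi))]
    return auc(clipped)

# Example ROC curve as (FPR, TPR) points
roc = [(0.0, 0.0), (0.1, 0.6), (0.4, 0.9), (1.0, 1.0)]
print(auc(roc))                    # full area under the ROC curve (c-statistic)
print(partial_auc(roc, 0.0, 0.4))  # partial area over FPR 0 to 0.4
```

A diagonal ROC curve (a classifier no better than chance) gives an AUC of 0.5, and a perfect classifier approaches 1.0.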

          Releases on the FEvIR Platform:

          The Fast Evidence Interoperability Resources (FEvIR) Platform is available for use now, but is “pre-release”.  The current version is 0.206.0 (March 18, 2024). Viewing resources is open without login.  Signing in is free and required to create content (which can then only be edited by the person who created the content).

          Release 0.206.0 (March 18, 2024) adds a "GRADEpro to FEvIR Converter" button to the home page.

          The GRADEpro-to-FEvIR Converter Version 0.3.0 (March 18, 2024) converts structured data from GRADEpro to FHIR JSON.

          Release 0.3.0 (March 18, 2024) converts all recommendation justification concepts to complete all parts of the GRADEpro conversion.

           

Quote for thought: "If you hear a voice within you say you cannot paint, then by all means paint and that voice will be silenced." --Vincent van Gogh

          Joanne Dehnbostel

          unread,
          Mar 21, 2024, 12:02:21 AM3/21/24
          to Health Evidence Knowledge Accelerator (HEvKA)

           

           

          10 people (BA, CE , JB, JD, KR, KS, KW, PR-S, RC, SM) participated today in up to 3 active working group meetings.

          The Measuring the Rate of Scientific Knowledge Transfer Working Group reviewed advances in the development of the RADAR software since we last met. We then discussed the type of articles that should be included in our methods study. We would like to include studies in high impact journals and designate a specified period of time for the study. We noted that McMaster University keeps a list of "practice changing studies". We then discussed how to measure inter-rater reliability and how to determine the sample size needed for the study to be powered sufficiently to detect significant differences. We will finish our pilot study concurrently with the development of a protocol for the "real" methods study.

          The StatisticsOnFHIR Working Group (a CDS EBMonFHIR sub-WG) continued the discussion of possible methods to measure inter-rater reliability for the Measuring the Rate of Scientific Knowledge Transfer project. The group also continued the discussion of our SEVCO collaboration with the Statistics Ontology, STATO.
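As one illustration of an inter-rater reliability measure the group might consider (our sketch, not the group's chosen method), Cohen's kappa compares the observed agreement between two raters against the agreement expected by chance:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters assigning categorical labels to the
    same items: (observed agreement - chance agreement) / (1 - chance)."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    # Chance agreement: product of each rater's marginal label frequencies.
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    if expected == 1:
        return 1.0
    return (observed - expected) / (1 - expected)

# Hypothetical data: two raters classifying 10 articles as
# practice-changing ("yes") or not ("no").
a = ["yes", "yes", "no", "no", "yes", "no", "yes", "no", "no", "yes"]
b = ["yes", "no", "no", "no", "yes", "no", "yes", "yes", "no", "yes"]
print(cohens_kappa(a, b))
```

Kappa is 1.0 for perfect agreement and near 0 when agreement is no better than chance, which is also relevant to the sample-size question: more items are needed to estimate kappa precisely when agreement is modest.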

          The Ontology Management Working Group started to formulate a protocol for translation of GRADE terms into other languages. We decided to maintain a google spreadsheet for data entry and discussed the pros and cons of several ideas for the layout of the spreadsheet such as having each row represent a term or creating a whole sheet in the spreadsheet for each term. Representatives interested in translating terms into French and Spanish were present at the meeting. We discussed whether or not to include the definitions and comments for application in the translation effort and suggested that instructions for the use of the data entry spreadsheet should also be translated. We will report these ideas when we meet with the larger group for the Friday GRADE Ontology meeting and will continue the discussion in next week's Tuesday Ontology Working Group meeting. 

          Releases on the FEvIR Platform:

          The FEvIR®: RAte of Dissemination Assessment Research (RADAR) Tool Version 0.11.1 (March 19, 2024) facilitates research conduct and project management for an investigation to measure the rate of scientific knowledge transfer.

Release 0.11.1 (March 19, 2024) CTRL + S no longer closes the article-rating modals.

FEvIR®: Evidence Builder/Viewer version 0.34.0 (March 19, 2024) creates and displays an Evidence Resource.

          Release 0.34.0 (March 19, 2024) displays links to referenced Resources in the Population, Exposures, and Outcome sections when the Resources are identified with FEvIR Linking Identifiers.

          The Fast Evidence Interoperability Resources (FEvIR) Platform is available for use now, but is “pre-release”.  The current version is 0.207.0 (March 19, 2024). Viewing resources is open without login.  Signing in is free and required to create content (which can then only be edited by the person who created the content).

Release 0.207.0 (March 19, 2024) produces a Research Study section instead of a Study Design section when creating a new Resource with the ComparativeEvidenceReport Profile.

          Computable Publishing®: Comparative Evidence Report Authoring Tool version 0.17.0 (March 19, 2024) creates and displays a Composition Resource with a ComparativeEvidenceReport Profile.

          Release 0.17.0 (March 19, 2024) supports editing and display of a Research Study section instead of a Study Design section, to match changes to the ComparativeEvidenceReport Profile in the EBMonFHIR Implementation Guide.

           

          Quote for Thought: "Spring is nature's way of saying 'Let's Party!'"–Robin Williams

          Joanne Dehnbostel

          unread,
          Mar 21, 2024, 9:10:54 AM3/21/24
          to Health Evidence Knowledge Accelerator (HEvKA)

          6 people (BA, CE , JD, JO, KS, MA) participated today in up to 2 active working group meetings.

The Funding the Ecosystem Infrastructure Working Group created a presentation for the Dissolve-E kickoff. Dissolve-E is an effort, led by the German Guideline Organization AWMF, to create a digital representation of clinical guidelines. There are 182 professional societies included and 850 guidelines to be digitized. The FEvIR Platform will be used to represent the guidelines; we are one of the "Innovation partners" in this effort. In the future this proposed infrastructure could also support the US National Guideline Clearinghouse and others. Dr. Brian Alper will present a short talk to the group on March 21, 2024. The slides created can be viewed here.

          The Communications Working Group continued to work on the presentation mentioned above and discussed the strategy for the publication of the SEVCO code system. The group decided to submit the Study Design terms in a separate paper including only SEVCO Version 1 as originally planned, instead of waiting to submit a paper containing additional terms when SEVCO Version 2 is ready.  

          Releases on the FEvIR Platform:

          The Fast Evidence Interoperability Resources (FEvIR) Platform is available for use now, but is “pre-release”.  The current version is 0.208.0 (March 20, 2024). Viewing resources is open without login.  Signing in is free and required to create content (which can then only be edited by the person who created the content).

Release 0.208.0 (March 20, 2024) the text input field now has an option to be required or not; if it is required and empty, it will be outlined in red until the user fills it in. A server-side function was added that can get an FOI list from an FLI list, which is used on our Comparative Evidence Report Builder/Viewer pages.

          The Computable Publishing®: Comparative Evidence Report Authoring Tool version 0.18.0 (March 20, 2024) creates and displays a Composition Resource with a ComparativeEvidenceReport Profile.

Release 0.18.0 (March 20, 2024) now loads data from other resources based on a FEvIR Linking Identifier (FLI) instead of an FOI number reference; FLIs are used by resources created by our GRADEpro conversion tool. The "Save changes to resource" button is not enabled until a change is made. If a change is made and the resource has not been given a title, the title field will have a red outline. The reference label was changed from "[resource type] Reference" to "Reference to [resource type] Resource".

          Quote for Thought: “The aim of medicine is to prevent disease and prolong life; the ideal of medicine is to eliminate the need of a physician.”—William J. Mayo

          Joanne Dehnbostel

          unread,
          Mar 22, 2024, 7:42:13 AM3/22/24
          to Health Evidence Knowledge Accelerator (HEvKA)

           

           

          4 people (CE, IK, JD, KS) participated today in up to 2 active working group meetings.

The EBM Implementation Guide Working Group (a CDS EBMonFHIR sub-WG) discussed creating a Baseline Measure Report Profile of the Composition Resource, which included investigating non-FHIR statistical software tools that produce a "Table 1", such as tableone (https://cran.r-project.org/web/packages/tableone/vignettes/introduction.html) and table1 (https://www.rdocumentation.org/packages/table1/versions/1.4.3) in R, and tableone in Python (https://pypi.org/project/tableone/).
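As a minimal standard-library illustration of what such a "Table 1" summarizes (a conceptual sketch with made-up data, not the tableone or table1 package API), a baseline characteristics table reports per-group counts, means and standard deviations of continuous variables, and frequencies of categorical variables:

```python
from statistics import mean, stdev
from collections import Counter

# Made-up example data: participants with a study arm, a continuous
# characteristic (age), and a categorical characteristic (sex).
participants = [
    {"arm": "treatment", "age": 54, "sex": "F"},
    {"arm": "treatment", "age": 61, "sex": "M"},
    {"arm": "treatment", "age": 47, "sex": "F"},
    {"arm": "control",   "age": 58, "sex": "M"},
    {"arm": "control",   "age": 50, "sex": "F"},
    {"arm": "control",   "age": 66, "sex": "M"},
]

# One "Table 1" row set per study arm.
for arm in ("treatment", "control"):
    rows = [p for p in participants if p["arm"] == arm]
    ages = [p["age"] for p in rows]
    sexes = Counter(p["sex"] for p in rows)
    print(f"{arm}: n={len(rows)}, age mean={mean(ages):.1f} "
          f"(SD {stdev(ages):.1f}), sex={dict(sexes)}")
```

A Baseline Measure Report Profile would carry these same summary statistics as structured FHIR data rather than as printed text.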

          The Computable EBM Tools Development Working Group reviewed progress on the GRADEpro-to-FEvIR Converter software tool which is now available on the FEvIR Platform. You can now test the GRADEpro-to-FEvIR Converter software tool by going to the main page on the FEvIR platform https://fevir.net and clicking the button for the GRADEpro-to-FEvIR tool. To do the conversion you will need a GRADEpro file ID number (example 64AF970C-9665-2F07-BFD3-EB4E658C5706). 

          Other news:

Dr. Brian Alper delivered a talk for the Dissolve-E kickoff.  Dissolve-E is an effort, led by the German Guideline Organization AWMF, to create a digital representation of clinical guidelines. There are 182 professional societies included and 850 guidelines to be digitized. The FEvIR Platform will be used to represent the guidelines; we are one of the "Innovation partners" in this effort. In the future this proposed infrastructure could also support the US National Guideline Clearinghouse and others.  The slides from his talk can be viewed here.

          Releases on the FEvIR Platform:

          The Computable Publishing®: Comparative Evidence Report Authoring Tool version 0.19.0 (March 21, 2024) creates and displays a Composition Resource with a ComparativeEvidenceReport Profile.

Release 0.19.0 (March 21, 2024) displays selected data from referenced ResearchStudy and Evidence Resources.

          Quote for thought: "Happiness is when 'me myself and I' are all aligned."-- Sometimes attributed to Mel Robbins

          janice tufte

          unread,
          Mar 22, 2024, 2:21:16 PM3/22/24
          to Health Evidence Knowledge Accelerator (HEvKA), Joanne Dehnbostel
          Exciting to see the funding the infrastructure of guidelines moving forward! 



          Joanne Dehnbostel

          unread,
          Mar 23, 2024, 12:16:55 PM3/23/24
          to Health Evidence Knowledge Accelerator (HEvKA)

           

           

          11 people (AI, BA, CA-D, HK, JD, KS, KW, MA, PW, SM, TD) participated today in up to 3 active working group meetings. 

The Risk of Bias Terminology Working Group re-drafted the definition of (study eligibility criteria not adhered to), which was recently moved under selection bias in the hierarchy. The group then defined two additional terms (publication bias, error in study selection not minimized). There are currently 3 terms open for vote.

           

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

          publication bias

          A study selection bias in which the available studies are not representative of the existing studies.

          Studies means research results in any form. Available studies means the studies are publicly available in any form. Existing studies includes available studies and studies that are not publicly available in any form.

study eligibility criteria not adhered to

A study selection bias due to incorrect implementation of the study inclusion and exclusion criteria.

          error in study selection not minimized

          A study selection bias due to an inadequate process for screening and evaluating potentially eligible studies.

An adequate process for screening and evaluating potentially eligible studies should include at least two independent reviewers for any steps that involve subjective judgment.

The GRADE Ontology Working Group discussed the term (Indirectness), which was open for vote last week. Based on the feedback received, the comment for application was re-drafted and the term has been re-opened for vote. The group also defined two new indirectness terms (Indirectness in population, Indirectness in exposure). There are currently 3 terms open for vote. The group will discuss translating the terms into other languages in our Ontology Management Working Group meeting on Tuesday from 3 to 4 pm Eastern time. 

           

          Indirectness

          Differences between the characteristics of the evidence and the intended target application.

Indirectness is one of the domains that can impact the rating of the certainty of evidence. Differences between the characteristics of the evidence and the intended target application might include setting, populations, exposures, interventions, comparators, or outcomes. The degree and importance of these differences will vary according to context, such as the key considerations for a guideline or systematic review. The "intended target application" may also be referred to as the "question of interest".

          Related terms used by others for "directness" include "relevance", "external validity", and "applicability". Directness is the extent to which the results of an experimental or observational study apply to a target context outside of that study.

          The term Indirectness is not used to describe indirect comparisons created by chaining a series of direct comparisons, such as those used for generating indirect effect estimates in network meta-analyses.

          Indirectness in population

          Indirectness related to the population.

          Indirectness in population means there are differences between the populations in the relevant research studies and the population under consideration in a question of interest. Decisions regarding indirectness of patients or populations may depend on an understanding of whether biological or social factors are sufficiently different that one might expect substantial differences in the magnitude of effect. The question of interest may vary with context, such as the key considerations for a guideline or systematic review.

          Indirectness in exposure

          Indirectness related to the intervention(s) or exposure(s).

          • Indirectness in intervention

          Indirectness in exposure means there are differences between the interventions or exposures in the relevant research studies and the interventions or exposures under consideration in a question of interest.

          Decisions regarding indirectness of interventions or exposures may depend on an understanding of whether intensity, setting, or social factors are sufficiently different that one might expect substantial differences in the magnitude of effect. The question of interest may vary with context, such as the key considerations for a guideline or systematic review.

           

          The Project Management Working Group created the suggested Agenda for next week:

          Day/Time (Eastern)

          Working Group

          Agenda Items

          Monday 8-9 am

          Project Management

          FHIR changes and EBMonFHIR Implementation Guide issues

          Monday 9-10 am

          Setting the Scientific Record on FHIR

          Create Examples of Common Data Elements

          Monday 10-11 am

          CQL Development (a CDS EBMonFHIR sub-WG)

          Review article on intermediate query format

          Monday 2-3 pm

          Statistic Terminology

          Review SEVCO terms (2 area under the curve terms open for vote)

          Tuesday 9 am-10 am

          Measuring the Rate of Scientific Knowledge Transfer

          SKAF Board Meeting

          Tuesday 2-3 pm

          StatisticsOnFHIR (a CDS EBMonFHIR sub-WG)

          Create Examples of Common Data Elements

          Tuesday 3-4 pm

          Ontology Management

          Discuss protocol for translation of GRADE Ontology Terms, STATO status

          Wednesday 8-9 am

          Funding the Ecosystem Infrastructure

          Create product launch readiness checklist for making guidelines computable

          Wednesday 9-10 am

          Communications (Awareness, Scholarly Publications)

          Publications -Study Design Terminology Paper-adjustments for resubmission, Presentations

          Thursday 8-9 am

          EBM Implementation Guide (a CDS EBMonFHIR sub-WG)

          Create Baseline Measure Report Profile of Composition Resource

          Thursday 9-10 am

          Computable EBM Tools Development

          Review progress across Composition authoring and viewing tools

          Friday 9-10 am

          Risk of Bias Terminology

          Review SEVCO terms (3 study selection bias terms open for vote)

          Friday 10-11 am

          GRADE Ontology

          Term development (3 Indirectness terms open for vote)

          Friday 12-1 pm

          Project Management

          Prepare Weekly Agenda

           

          Releases on the FEvIR Platform:

          Computable Publishing®: Recommendation Authoring Tool version 0.13.0 (March 22, 2024) creates a Composition Resource with a Recommendation Profile and the associated Resources for a structured representation of a recommendation. 

          Release 0.13.0 (March 22, 2024) detects related artifact entries for the "derived-from" PlanDefinition and RecommendationJustification Resources using FEvIR Linking Identifiers and adds the FEvIR Object Identifiers to preserve the function of updating these Resources with the structured data when the user makes changes to the Recommendation Resource.

           

Quote for thought: "Stop the World – I Want to Get Off" from the 1961 musical with book, music, and lyrics by Leslie Bricusse and Anthony Newley.

          Joanne Dehnbostel

          unread,
          Mar 25, 2024, 5:27:28 PM3/25/24
          to Health Evidence Knowledge Accelerator (HEvKA), Health Evidence Knowledge Accelerator (HEvKA) Weekly Update

           

           

          18 people (AI, BA, CA-D, CE , HK, IK, JB, JD, JO, KR, KS, KW, MA, PR-S, PW, RC, SM, TD) from 8 countries (Belgium, Canada, Chile/Spain, Finland, Norway, Peru, UK, USA) participated this week in up to 14 active working group meetings.

          Project Coordination Updates:

          The Project Management Working Group created the suggested Agenda for next week:

          Day/Time (Eastern)

          Working Group

          Agenda Items

          Monday 8-9 am

          Project Management

          FHIR changes and EBMonFHIR Implementation Guide issues

          Monday 9-10 am

          Setting the Scientific Record on FHIR

          Create Examples of Common Data Elements

          Monday 10-11 am

          CQL Development (a CDS EBMonFHIR sub-WG)

          Review article on intermediate query format

          Monday 2-3 pm

          Statistic Terminology

          Review SEVCO terms (2 area under the curve terms open for vote)

          Tuesday 9 am-10 am

          Measuring the Rate of Scientific Knowledge Transfer

          SKAF Board Meeting

          Tuesday 2-3 pm

          StatisticsOnFHIR (a CDS EBMonFHIR sub-WG)

          Create Examples of Common Data Elements

          Tuesday 3-4 pm

          Ontology Management

          Discuss protocol for translation of GRADE Ontology Terms, STATO status

          Wednesday 8-9 am

          Funding the Ecosystem Infrastructure

          Create product launch readiness checklist for making guidelines computable

          Wednesday 9-10 am

          Communications (Awareness, Scholarly Publications)

          Publications -Study Design Terminology Paper-adjustments for resubmission, Presentations

          Thursday 8-9 am

          EBM Implementation Guide (a CDS EBMonFHIR sub-WG)

          Create Baseline Measure Report Profile of Composition Resource

          Thursday 9-10 am

          Computable EBM Tools Development

          Review progress across Composition authoring and viewing tools

          Friday 9-10 am

          Risk of Bias Terminology

          Review SEVCO terms (3 study selection bias terms open for vote)

          Friday 10-11 am

          GRADE Ontology

          Term development (3 Indirectness terms open for vote)

          Friday 12-1 pm

          Project Management

          Prepare Weekly Agenda

           

          The Communications Working Group continued to work on the presentation mentioned above and discussed the strategy for the publication of the SEVCO code system. The group decided to submit the Study Design terms in a separate paper including only SEVCO Version 1 as originally planned, instead of waiting to submit a paper containing additional terms when SEVCO Version 2 is ready.  

On Thursday, Dr. Brian Alper delivered a talk for the Dissolve-E kickoff.  Dissolve-E is an effort, led by the German Guideline Organization AWMF, to create a digital representation of clinical guidelines. There are 182 professional societies included and 850 guidelines to be digitized. The FEvIR Platform will be used to represent the guidelines; we are one of the "Innovation partners" in this effort. In the future this proposed infrastructure could also support the US National Guideline Clearinghouse and others.  The slides from his talk can be viewed here.

          SEVCO Updates:

The Statistic Terminology Working Group found that the term that was open for vote last week (area under the ROC curve) received enough votes to be accepted into the code system. Two additional terms were defined in the meeting today (c-statistic, partial area under the ROC curve). There are currently 2 terms open for voting. 

           Voting Results

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

          Newly Open For Vote

          c-statistic

          Area under the ROC curve calculated with the full range of possible values for true positive rate and false positive rate.

Area under the ROC curve is defined as an area under the curve where the curve is the true positive rate and the range of interest is the false positive rate. ROC stands for Receiver Operating Characteristic. The area under the ROC curve is used to assess the performance of a classifier used to distinguish between two or more groups. Another term for true positive rate is sensitivity and another term for false positive rate is 1-specificity.

          Newly Open For Vote

          partial area under the ROC curve

Area under the ROC curve calculated with a specified portion of the full range of possible values for true positive rate and false positive rate.

Area under the ROC curve is defined as an area under the curve where the curve is the true positive rate and the range of interest is the false positive rate. ROC stands for Receiver Operating Characteristic. The area under the ROC curve is used to assess the performance of a classifier used to distinguish between two or more groups. Another term for true positive rate is sensitivity and another term for false positive rate is 1-specificity.

Passed

area under the ROC curve

An area under the curve where the curve is the true positive rate and the range of interest is the false positive rate.

          • AUC
          • AUROC
          • area under the receiver operating characteristic curve

The area under the ROC curve is used to assess the performance of a classifier used to distinguish between two or more groups. ROC stands for Receiver Operating Characteristic. Another term for true positive rate is sensitivity and another term for false positive rate is 1-specificity.

          The Group also discussed the collaboration with the Statistics Ontology , STATO.

           

           

The Risk of Bias Terminology Working Group re-drafted the definition of (study eligibility criteria not adhered to), which was recently moved under selection bias in the hierarchy. The group then defined two additional terms (publication bias, error in study selection not minimized). There are currently 3 terms open for vote.

           

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for application
          (if any)

publication bias

A study selection bias in which the available studies are not representative of the existing studies.

          Studies means research results in any form. Available studies means the studies are publicly available in any form. Existing studies includes available studies and studies that are not publicly available in any form.

          study eligibility criteria not adhered to

          A study selection bias due to incorrect implementation of the study inclusion and exclusion criteria.

          error in study selection not minimized

          A study selection bias due to an inadequate process for screening and evaluating potentially eligible studies.

An adequate process for screening and evaluating potentially eligible studies should include at least two independent reviewers for any steps that involve subjective judgment.

          Knowledge Ecosystem Liaison/Coordination Updates:

          The Ontology Management Working Group started to formulate a protocol for translation of GRADE terms into other languages. We decided to maintain a google spreadsheet for data entry and discussed the pros and cons of several ideas for the layout of the spreadsheet such as having each row represent a term or creating a whole sheet in the spreadsheet for each term. Representatives interested in translating terms into French and Spanish were present at the meeting. We discussed whether or not to include the definitions and comments for application in the translation effort and suggested that instructions for the use of the data entry spreadsheet should also be translated. We will report these ideas when we meet with the larger group for the Friday GRADE Ontology meeting and will continue the discussion in next week's Tuesday Ontology Working Group meeting. 

The Funding the Ecosystem Infrastructure Working Group created a presentation for the Dissolve-E kickoff. Dissolve-E is an effort, led by the German Guideline Organization AWMF, to create a digital representation of clinical guidelines. There are 182 professional societies included and 850 guidelines to be digitized. The FEvIR Platform will be used to represent the guidelines; we are one of the "Innovation partners" in this effort. In the future this proposed infrastructure could also support the US National Guideline Clearinghouse and others. Dr. Brian Alper presented a short talk to the group on March 21, 2024. The slides created can be viewed here.

The GRADE Ontology Working Group discussed the term (Indirectness), which was open for vote last week. Based on the feedback received, the comment for application was re-drafted and the term has been re-opened for vote. The group also defined two new indirectness terms (Indirectness in population, Indirectness in exposure). There are currently 3 terms open for vote. The group will discuss translating the terms into other languages in our Ontology Management Working Group meeting on Tuesday from 3 to 4 pm Eastern time. The meeting was recorded and can be viewed here.

          Term: Indirectness
          Definition: Differences between the characteristics of the evidence and the intended target application.
          Comment for Application: Indirectness is one of the domains that can impact the rating of the certainty of evidence. Differences between the characteristics of the evidence and the intended target application might include setting, populations, exposures, interventions, comparators, or outcomes. The degree and importance of these differences will vary according to context, such as the key considerations for a guideline or drafted systematic review. The "intended target application" may also be referred to as the "question of interest".
          Related terms used by others for "directness" include "relevance", "external validity", and "applicability". Directness is the extent to which the results of an experimental or observational study apply to a target context outside of that study.
          The term Indirectness is not used to describe indirect comparisons created by chaining a series of direct comparisons, such as those used for generating indirect effect estimates in network meta-analyses.

          Term: Indirectness in population
          Definition: Indirectness related to the population.
          Comment for Application: Indirectness in population means there are differences between the populations in the relevant research studies and the population under consideration in a question of interest. Decisions regarding indirectness of patients or populations may depend on an understanding of whether biological or social factors are sufficiently different that one might expect substantial differences in the magnitude of effect. The question of interest may vary with context, such as the key considerations for a guideline or systematic review.

          Term: Indirectness in exposure
          Definition: Indirectness related to the intervention(s) or exposure(s).
          Alternative Terms: Indirectness in intervention
          Comment for Application: Indirectness in exposure means there are differences between the interventions or exposures in the relevant research studies and the interventions or exposures under consideration in a question of interest. Decisions regarding indirectness of interventions or exposures may depend on an understanding of whether intensity, setting, or social factors are sufficiently different that one might expect substantial differences in the magnitude of effect. The question of interest may vary with context, such as the key considerations for a guideline or systematic review.

           

          FEvIR Platform and Tools Development Updates:

          The StatisticsOnFHIR Working Group (a CDS EBMonFHIR sub-WG) continued the discussion of possible methods to measure inter-rater reliability for the Measuring the Rate of Scientific Knowledge Transfer project. The group also continued the discussion of our SEVCO collaboration with the Statistics Ontology, STATO.

          The Computable EBM Tools Development Working Group reviewed progress on the GRADEpro-to-FEvIR Converter software tool, which is now available on the FEvIR Platform. You can test it by going to the main page at https://fevir.net and clicking the GRADEpro-to-FEvIR Converter button. To perform a conversion you will need a GRADEpro file ID (example: 64AF970C-9665-2F07-BFD3-EB4E658C5706).

          HL7 Standards Development Updates: 

          The CQL Development Working Group (a CDS EBMonFHIR sub-WG) also continued the effort to convert example inclusion and exclusion criteria to CQL.

          On March 18, the three morning working groups (the Project Management Working Group, the Setting the Scientific Record on FHIR Working Group, and the CQL Development Working Group, a CDS EBMonFHIR sub-WG) all started to respond to a Request For Information (RFI) from NIH: https://nexus.od.nih.gov/all/2024/02/28/seeking-ideas-on-using-common-data-elements-for-nih-supported-clinical-research/. This response will include examples of Common Data Elements. Three examples were created as FHIR EvidenceVariable Resources.

          The EBM Implementation Guide Working Group (a CDS EBMonFHIR sub-WG) discussed creating a Baseline Measure Report Profile of Composition Resource, which included investigating non-FHIR statistical tools that produce a "Table 1", such as tableone (https://cran.r-project.org/web/packages/tableone/vignettes/introduction.html) and table1 (https://www.rdocumentation.org/packages/table1/versions/1.4.3) in R, and tableone in Python (https://pypi.org/project/tableone/).
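As a rough illustration of what these packages produce, here is a minimal standard-library sketch of a baseline-characteristics summary stratified by study arm. The data and variable names are hypothetical, and this is not the tableone or table1 API; the real packages add counts, percentages, p-values, and categorical-variable handling.

```python
# Minimal "Table 1" sketch with hypothetical data (not the tableone/table1 API).
from statistics import mean, stdev

def table_one(participants):
    """Return {group: (n, mean age, SD of age, n female)} per study arm."""
    summary = {}
    for group in sorted({p["group"] for p in participants}):
        rows = [p for p in participants if p["group"] == group]
        ages = [p["age"] for p in rows]
        summary[group] = (len(rows), mean(ages), stdev(ages),
                          sum(p["sex"] == "F" for p in rows))
    return summary

# Hypothetical participants in a two-arm study.
participants = [
    {"group": "treatment", "age": 54, "sex": "F"},
    {"group": "treatment", "age": 61, "sex": "M"},
    {"group": "control", "age": 58, "sex": "F"},
    {"group": "control", "age": 49, "sex": "M"},
    {"group": "control", "age": 66, "sex": "F"},
]

for group, (n, age_mean, age_sd, n_f) in table_one(participants).items():
    print(f"{group}: n={n}, age {age_mean:.1f} ({age_sd:.1f}), female {n_f}/{n}")
```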

          Research Development Updates:

          The Measuring the Rate of Scientific Knowledge Transfer Working Group reviewed advances in the development of the RADAR software since we last met. We then discussed the types of articles that should be included in our methods study. We would like to include studies in high-impact journals and designate a specified period of time for the study. We noted that McMaster University keeps a list of "practice changing studies". We then discussed how to measure inter-rater reliability and how to determine the sample size needed for the study to be powered sufficiently to detect significant differences. We will finish our pilot study concurrently with the development of a protocol for the full methods study.
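One common inter-rater reliability statistic for two raters is Cohen's kappa, which corrects observed agreement for chance agreement. The sketch below uses hypothetical include/exclude screening decisions and is not the RADAR tool's actual method, which is still under discussion:

```python
# Cohen's kappa for two raters (illustrative sketch, hypothetical data).
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters over the same items, corrected for chance."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability both raters independently pick the same category.
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical screening decisions by two reviewers on six articles.
ratings_a = ["include", "include", "exclude", "include", "exclude", "exclude"]
ratings_b = ["include", "exclude", "exclude", "include", "exclude", "include"]
print(round(cohens_kappa(ratings_a, ratings_b), 3))  # prints 0.333
```

A sample-size target could then be set for the desired precision of kappa, in line with the group's power discussion.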

          Releases on the FEvIR Platform:

          The Fast Evidence Interoperability Resources (FEvIR) Platform is available for use now, but is “pre-release”.  The current version is 0.208.0 (March 20, 2024). Viewing resources is open without login.  Signing in is free and required to create content (which can then only be edited by the person who created the content).

          Release 0.206.0 (March 18, 2024) adds a "GRADEpro to FEvIR Converter" button to the home page.

          Release 0.207.0 (March 19, 2024) produces a Research Study section instead of a Study Design section when creating a new Resource with the ComparativeEvidenceReport Profile.

          Release 0.208.0 (March 20, 2024) adds an option to mark text input fields as required; a required field that is empty is outlined in red until the user fills it in. A server-side function was added that can get an FOI list from an FLI list, which is used on our Comparative Evidence Report Builder/Viewer pages.

           

          The Computable Publishing®: Recommendation Authoring Tool version 0.13.0 (March 22, 2024) creates a Composition Resource with a Recommendation Profile and the associated Resources for a structured representation of a recommendation. 

          Release 0.13.0 (March 22, 2024) detects related artifact entries for the "derived-from" PlanDefinition and RecommendationJustification Resources using FEvIR Linking Identifiers and adds the FEvIR Object Identifiers to preserve the function of updating these Resources with the structured data when the user makes changes to the Recommendation Resource.

          The Computable Publishing®: Comparative Evidence Report Authoring Tool version 0.19.0 (March 21, 2024) creates and displays a Composition Resource with a ComparativeEvidenceReport Profile.

          Release 0.17.0 (March 19, 2024) supports editing and display of a Research Study section instead of a Study Design section, to match changes to the ComparativeEvidenceReport Profile in the EBMonFHIR Implementation Guide.

          Release 0.18.0 (March 20, 2024) now loads data from other resources based on a FEvIR Linking Identifier (FLI) instead of an FOI number reference; FLIs are used by resources created by our GRADEpro conversion tool. The Save Changes to Resource button is not enabled until a change is made. If a change is made and a title isn't given to a resource, the title field will have a red outline. The reference label was changed from "[resource type] Reference" to "Reference to [resource type] Resource".

          Release 0.19.0 (March 21, 2024) displays selected data from referenced ResearchStudy and Evidence Resources.

          The FEvIR®: RAte of Dissemination Assessment Research (RADAR) Tool Version 0.11.1 (March 19, 2024) facilitates research conduct and project management for an investigation to measure the rate of scientific knowledge transfer.

          Release 0.11.1 (March 19, 2024) ensures that CTRL + S no longer closes the article-rating modals.

          FEvIR®: Evidence Builder/Viewer version 0.34.0 (March 19, 2024) creates and displays an Evidence Resource.

          Release 0.34.0 (March 19, 2024) displays links to referenced Resources in the Population, Exposures, and Outcome sections when the Resources are identified with FEvIR Linking Identifiers.

          The GRADEpro-to-FEvIR Converter Version 0.3.0 (March 18, 2024) converts structured data from GRADEpro to FHIR JSON.

          Release 0.3.0 (March 18, 2024) converts all recommendation justification concepts to complete all parts of the GRADEpro conversion.

          Quotes for Thought:

          "If you hear a voice within you say you cannot paint, then by all means paint and that voice will be silenced." Vincent van Gogh

          "Spring is nature's way of saying 'Let's Party!'"–Robin Williams

          “The aim of medicine is to prevent disease and prolong life; the ideal of medicine is to eliminate the need of a physician.”—William J. Mayo

          "Happiness is when 'me myself and I' are all aligned."-- Sometimes attributed to Mel Robbins

          "Stop the World – I Want to Get Off" from a 1961 musical with a book, music, and lyrics by Leslie Bricusse and Anthony Newley.

          To get involved or stay informed: HEvKA Project Page on FEvIR Platform, HEvKA Project Page on HL7 Confluence, or join any of the groups that are now meeting in the following weekly schedule:

          Joanne Dehnbostel

          unread,
          Mar 26, 2024, 6:46:26 PM3/26/24
          to Health Evidence Knowledge Accelerator (HEvKA)

          March 25

          7 people (BA, CE , HL, JD, KS, KW, MA) participated today in up to 4 active working group meetings.

          The Project Management Working Group corrected some issues with the Computable Publishing Net Effect Calculator. We also discussed methods for calculating absolute risk reduction from a meta-analysis with a confidence interval: https://www.bmj.com/content/381/bmj-2022-073141.
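One widely used approach (for example, in GRADE summary-of-findings tables) derives the absolute risk reduction and its confidence interval by applying the pooled relative risk and its confidence limits to an assumed baseline risk. The sketch below uses hypothetical numbers; see the BMJ article linked above for fuller methods.

```python
# Hedged sketch: absolute risk reduction (per 1000) from a relative risk
# and its 95% CI, applied to an assumed baseline risk. Numbers are hypothetical.

def arr_per_1000(baseline_risk, rr, rr_low, rr_high):
    """ARR per 1000 patients, with CI, from a relative risk and its CI."""
    point = baseline_risk * (1 - rr) * 1000
    # Applying the RR confidence limits to the same baseline risk carries
    # the relative uncertainty onto the absolute scale.
    low = baseline_risk * (1 - rr_high) * 1000
    high = baseline_risk * (1 - rr_low) * 1000
    return point, low, high

point, low, high = arr_per_1000(baseline_risk=0.10, rr=0.80, rr_low=0.65, rr_high=0.95)
print(f"{point:.0f} fewer per 1000 (95% CI {low:.0f} to {high:.0f} fewer)")
```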

          The Setting the Scientific Record on FHIR Working Group modified the example of a common data element (CDE) collective EvidenceVariable Resource to more accurately represent the answer choice values (EvidenceVariable.category.valueCodeableConcept) as used in the source CDE EvidenceVariable Resources. 

          The CQL Development Working Group (a CDS EBMonFHIR sub-WG) worked to peer review an article on intermediate query format after receiving a request from a journal editor. 

          The Statistic Terminology Working Group found that the two terms open for vote last week (c-statistic, partial area under the ROC curve) did not receive enough votes to be added to the code system. They are still open for vote. The group then worked to draft definitions for two additional area under the curve terms (area under the value-by-time curve, area under the precision-recall curve), which are now also open for vote. The group then decided to remove three terms (measurement value, duration, and time to event) from the code system and began discussing the definition of absolute value; this discussion will continue next week. There are currently 4 statistic terms open for vote.

          Term: c-statistic
          Definition: Area under the ROC curve calculated with the full range of possible values for true positive rate and false positive rate.
          Comment for Application: Area under the ROC curve is defined as an area under the curve where the curve is the true positive rate and the range of interest is the false positive rate. ROC stands for Receiver Operating Characteristic. The area under the ROC curve is used to assess the performance of a classifier used to distinguish between two or more groups. Another term for true positive rate is sensitivity, and another term for false positive rate is 1-specificity.

          Term: partial area under the ROC curve
          Definition: Area under the ROC curve calculated with a specified portion of the full range of possible values for true positive rate and false positive rate.
          Comment for Application: Area under the ROC curve is defined as an area under the curve where the curve is the true positive rate and the range of interest is the false positive rate. ROC stands for Receiver Operating Characteristic. The area under the ROC curve is used to assess the performance of a classifier used to distinguish between two or more groups. Another term for true positive rate is sensitivity, and another term for false positive rate is 1-specificity.

          Term: area under the value-by-time curve
          Definition: An area under the curve where the curve is the repeated measures of a variable over time and the domain of interest is time.
          Comment for Application: The area under the value-by-time curve is used for pharmacokinetics, pharmacodynamics, and physiological monitoring.

          Term: area under the precision-recall curve
          Definition: An area under the curve where the curve is the precision and the domain of interest is the recall.
          Alternative Terms: AUPRC, AUCPR, AUC-PR
          Comment for Application: In information retrieval, recall is a synonym for sensitivity and precision is a synonym for positive predictive value.
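Two of the terms above can be illustrated with a minimal standard-library sketch (not SEVCO reference code): the c-statistic computed as the probability that a randomly chosen positive case outranks a randomly chosen negative case, and an area under a value-by-time curve computed with the trapezoidal rule. The scores and measurements below are hypothetical.

```python
# Illustrative sketches of two area-under-the-curve statistics.

def c_statistic(scores_pos, scores_neg):
    """P(score of a positive > score of a negative), ties counted as 1/2.
    This equals the area under the ROC curve over its full range."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

def auc_value_by_time(times, values):
    """Trapezoidal area under repeated measures of a value over time."""
    return sum((t2 - t1) * (v1 + v2) / 2
               for t1, t2, v1, v2 in zip(times, times[1:], values, values[1:]))

print(c_statistic([0.9, 0.8, 0.6], [0.7, 0.4, 0.3]))       # classifier scores
print(auc_value_by_time([0, 1, 2, 4], [0.0, 2.0, 3.0, 1.0]))  # e.g., drug concentration over hours
```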

          Releases on the FEvIR Platform:

          The Fast Evidence Interoperability Resources (FEvIR) Platform is available for use now, but is “pre-release”.  The current version is 0.209.0 (March 25, 2024). Viewing resources is open without login.  Signing in is free and required to create content (which can then only be edited by the person who created the content).

          Release 0.209.0 (March 25, 2024) displays a line break when a single \n is found in a markdown datatype, and displays a hyperlink when a canonical datatype value is found within a RelatedArtifact datatype.
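The single-\n rule described above can be sketched as follows. This is an illustrative approximation, not the FEvIR Platform's actual implementation: a lone \n inside a markdown value becomes a hard line break (two trailing spaces before the newline), while a blank line still separates paragraphs.

```python
# Illustrative approximation of the line-break rule (not FEvIR Platform code).
import re

def single_newlines_to_breaks(markdown_text):
    # Match a \n that is neither preceded nor followed by another \n.
    return re.sub(r"(?<!\n)\n(?!\n)", "  \n", markdown_text)

print(single_newlines_to_breaks("line one\nline two\n\nnew paragraph"))
```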

          Quote for Thought: "Anyone who has never made a mistake has never tried anything new." --Albert Einstein

          Joanne Dehnbostel

          unread,
          Mar 27, 2024, 7:54:08 AM3/27/24
          to Health Evidence Knowledge Accelerator (HEvKA)

           

           

          March 26

          15 people (BA, BK, CE , CM, HL, IK, JB, JD, KR, KS, KW, MA, RC, SL, SM) participated today in up to 3 active working group meetings.

          The Board of the Scientific Knowledge Accelerator Foundation met today. The main agenda items were strategies for fundraising, increasing public awareness of our mission, and answering a survey from Guidelines International Network. 

          The StatisticsOnFHIR Working Group (a CDS EBMonFHIR sub-WG) introduced new participants to the work we are doing to respond to a Request For Information (RFI) from NIH https://nexus.od.nih.gov/all/2024/02/28/seeking-ideas-on-using-common-data-elements-for-nih-supported-clinical-research/. This response will include examples of Common Data Elements. 

          The Ontology Management Working Group met again to discuss our protocol to translate GRADE terms from the GRADE Ontology project into other languages. We decided to limit our first efforts to translating the term itself and any alternative terms (synonyms) and worked on the layout of the spreadsheet we will use to collect the data. This discussion will continue in next week's Ontology Management meeting. 
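The row-per-term layout under discussion could look like the following sketch: one row per GRADE term, with columns for the English term and its alternative terms plus matching columns per target language. The column names and the sample row are placeholders, not approved GRADE Ontology content; translation cells are left blank for translators to fill in.

```python
# Hypothetical sketch of a row-per-term translation spreadsheet layout.
import csv
import io

header = ["term_id", "term_en", "alternative_terms_en",
          "term_fr", "alternative_terms_fr",
          "term_es", "alternative_terms_es"]
rows = [["GRADE-0001", "indirectness", "", "", "", "", ""]]  # placeholder row

buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(header)
writer.writerows(rows)
print(buffer.getvalue())
```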

          Quote for thought: "Give me a lever long enough and a fulcrum on which to place it, and I shall move the world." — Archimedes

          Joanne Dehnbostel

          unread,
          Mar 28, 2024, 1:13:04 AM3/28/24
          to Health Evidence Knowledge Accelerator (HEvKA)

           

           

          7 people (BA, CE , JD, KR, KS, KW, MA) participated today in up to 2 active working group meetings.

          The Funding the Ecosystem Infrastructure Working Group created a product launch readiness checklist for making guidelines computable which can be viewed here https://docs.google.com/document/d/1FcLL3Tkx87tpcmOmv-0tpKeWxI5VS8uS/edit

          The Communications Working Group worked to make minor revisions to "Making Science Computable: Introducing Evidence-Based Medicine on Fast Healthcare Interoperability Resources (EBMonFHIR)" which has been accepted for publication with the Journal of Medical Internet Research (JMIR). 

          Releases on the FEvIR Platform:

          Computable Publishing®: Summary Of Findings Authoring Tool version 0.18.0 (March 27, 2024) creates and displays a SummaryOfFindings Profile of Composition Resource.

           Release 0.18.0 (March 27, 2024) identifies FEvIR Object Identifiers (FOIs) for any referenced Resources using FEvIR Linking Identifiers, to enable function support for content creation (such as the Generate Table Row Content button to automatically generate section Narrative content from the referenced Resources) for Summary Of Finding Compositions created from conversion tools (such as Computable Publishing®: GRADEpro-to-FEvIR Converter).

          Quote for Thought: "I say luck is when an opportunity comes along and you're prepared for it." --- Denzel Washington

          Joanne Dehnbostel

          unread,
          Mar 29, 2024, 3:54:19 AM3/29/24
          to Health Evidence Knowledge Accelerator (HEvKA)

           

           

          5 people (BA, CE , IK, JD, KS) participated today in up to 2 active working group meetings.

          The EBM Implementation Guide Working Group (a CDS EBMonFHIR sub-WG) discussed a question posed by an implementer who needs to document the source of Evidence content, including a quote of the exact phrasing in the source material. We created Jira ticket https://jira.hl7.org/browse/FHIR-45094 to add a comment (and an example) to Evidence.relatedArtifact explaining how to express the source of extracted evidence.
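One possible shape for this is a relatedArtifact entry whose citation carries the exact quoted phrasing. This is a hypothetical fragment for illustration only, not the resolution of the Jira ticket, and other required Evidence elements are omitted for brevity:

```json
{
  "resourceType": "Evidence",
  "relatedArtifact": [
    {
      "type": "derived-from",
      "display": "Hypothetical source article (illustrative only)",
      "citation": "Author A, et al. Example Journal. 2023. Quoted phrasing: \"exact wording extracted from the source\""
    }
  ]
}
```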

          Computable EBM Tools Development Working Group reviewed progress across Composition authoring and viewing tools and discussed priorities for future developments. 

          Releases on the FEvIR Platform

          Computable Publishing®: ClinicalTrials.gov-to-FEvIR Converter version 4.11.3 (March 28, 2024)  converts ClinicalTrials.gov JSON to FEvIR Resources in FHIR JSON.

          Release 4.11.3 (March 28, 2024) corrects the ResearchStudy.result value from an object to an array.

          Computable Publishing® GRADEpro-to-FEvIR Converter  Version 0.3.1 (March 28, 2024) converts structured data from GRADEpro to FHIR JSON.

          Release 0.3.1 (March 28, 2024) adds an artifactReference element referencing the Composition Resource (Recommendation Profile) to the ArtifactAssessment Resource (RecommendationJustification Profile).

          Quote for Thought: "The more things change, the more they are the same." -- Alphonse Karr

          Joanne Dehnbostel

          unread,
          Mar 30, 2024, 5:39:57 PM3/30/24
          to Health Evidence Knowledge Accelerator (HEvKA)

           

           

          9 people (BA, CA-D, HL, JD, KS, KW, PW, SL, SM) participated today in up to 3 active working group meetings.

          The Risk of Bias Terminology Working Group found that each of the 3 terms open for vote last week received a negative vote. Two of the terms, study eligibility not adhered to and error in study selection not minimized, were renamed as shown below in response to feedback from the expert working group. The terms have been revised and are reopened for vote. If you have already voted on these terms, please vote again. There are currently 3 terms open for vote.

          Term: publication bias
          Definition: A study selection bias in which the publicly available studies are not representative of all conducted studies.
          Comment for Application: The term 'studies' means evidence or research results in any form where such studies would meet the study eligibility criteria without consideration of criteria regarding the form of publication. The phrase 'publicly available studies' means the studies are available to the broad academic community and the public through established distribution channels in any form, including forms with restricted access. Established distribution channels include peer-reviewed journals, books, conference proceedings, dissertations, reports by governmental or research organizations, preprints, and study registries.

          Term: misapplication of study eligibility criteria
          Definition: A study selection bias due to incorrect implementation of the study inclusion and exclusion criteria.
          Alternative Terms: nonadherence to study eligibility criteria

          Term: bias in study selection process
          Definition: A study selection bias due to an inadequate process for screening and evaluating potentially eligible studies.
          Comment for Application: An adequate process for screening and evaluating potentially eligible studies should generally include at least two independent reviewers for any steps that involve subjective judgment.

           

          The GRADE Ontology Working Group discussed a GRADE concept paper to be written in time for presentation at the GRADE Working Group Meeting in Miami, FL on May 1. The group then found that of the three terms open for vote last week, the parent term, Indirectness, received 13 positive votes, so it has been accepted into the code system. Indirectness in population received 11 votes, with one negative vote and 4 comments; Indirectness in exposure received 11 votes, with 3 negative votes and 5 comments. The group revised the term Indirectness in population, which has been reopened for vote; if you previously voted on this term, please vote again. The term Indirectness in exposure will be discussed and likely reopened for vote at next week's meeting. The meeting was recorded; please request a copy of the recording if interested.

          The group will continue our discussion regarding the translation of terms into additional languages in our Ontology Management Working Group on Tuesday from 3 to 4 pm Eastern time. 

          Term: Indirectness (passed)
          Definition: Differences between the characteristics of the evidence and the intended target application.
          Comment for Application: Indirectness is one of the domains that can impact the rating of the certainty of evidence. Differences between the characteristics of the evidence and the intended target application might include setting, populations, exposures, interventions, comparators, or outcomes. The degree and importance of these differences will vary according to context, such as the key considerations for a guideline or drafted systematic review. The "intended target application" may also be referred to as the "question of interest".
          Related terms used by others for "directness" include "relevance", "external validity", and "applicability". Directness is the extent to which the results of an experimental or observational study apply to a target context outside of that study.
          The term Indirectness is not used to describe indirect comparisons created by chaining a series of direct comparisons, such as those used for generating indirect effect estimates in network meta-analyses.

          Term: Indirectness in population (re-opened for vote)
          Definition: Indirectness related to the population.
          Comment for Application: Indirectness in population means there are differences between the populations in the relevant research studies and the population under consideration in a question of interest. Decisions regarding indirectness of patients or populations may depend on an understanding of whether biological or social factors are sufficiently different that one might expect substantial differences in the magnitude of effect. The question of interest may vary with context, such as the key considerations for a guideline or systematic review.

          Term: Indirectness in exposure (received negative votes; vote closed pending discussion next week)
          Definition: Indirectness related to the intervention(s) or exposure(s).
          Alternative Terms: Indirectness in intervention
          Comment for Application: Indirectness in exposure means there are differences between the interventions or exposures in the relevant research studies and the interventions or exposures under consideration in a question of interest. Decisions regarding indirectness of interventions or exposures may depend on an understanding of whether intensity, setting, or social factors are sufficiently different that one might expect substantial differences in the magnitude of effect. The question of interest may vary with context, such as the key considerations for a guideline or systematic review.

           

          The Project Management Working Group prepared the agenda for the week of April 1-5, 2024.

          Day/Time (Eastern) | Working Group | Agenda Items
          Monday 8-9 am | Project Management | FHIR changes and EBMonFHIR Implementation Guide issues
          Monday 9-10 am | Setting the Scientific Record on FHIR | Create Examples of Common Data Elements and Comparison View
          Monday 10-11 am | CQL Development (a CDS EBMonFHIR sub-WG) | Review article on intermediate query format
          Monday 2-3 pm | Statistic Terminology | Review SEVCO terms (4 area under the curve terms open for vote)
          Tuesday 9-10 am | Measuring the Rate of Scientific Knowledge Transfer | Write Instructions for Evaluating Article Citations
          Tuesday 2-3 pm | StatisticsOnFHIR (a CDS EBMonFHIR sub-WG) | Create Examples of Common Data Elements and/or EBM IG Profile development
          Tuesday 3-4 pm | Ontology Management | Discuss protocol for translation of GRADE Ontology Terms, STATO status
          Wednesday 8-9 am | Funding the Ecosystem Infrastructure | Create product launch readiness checklist for making guidelines computable
          Wednesday 9-10 am | Communications (Awareness, Scholarly Publications) | Publications - GRADE Ontology concept paper, Presentations
          Thursday 8-9 am | EBM Implementation Guide (a CDS EBMonFHIR sub-WG) | Create Baseline Measure Report Profile of Composition Resource
          Thursday 9-10 am | Computable EBM Tools Development | Review progress with Summary of Findings authoring and viewing tools
          Friday 9-10 am | Risk of Bias Terminology | Review SEVCO terms (3 study selection bias terms open for vote)
          Friday 10-11 am | GRADE Ontology | Term development (1 Indirectness term open for vote)
          Friday 12-1 pm | Project Management | Prepare Weekly Agenda

          Releases on the FEvIR Platform:

          Computable Publishing®: Fast Evidence Interoperability Resources (FEvIR) Platform is available for use now, but is “pre-release”.  The current version is 0.210.0 (March 28, 2024). Viewing resources is open without login.  Signing in is free and required to create content (which can then only be edited by the person who created the content).

          Release 0.210.0 (March 28, 2024) identifies FEvIR Object Identifiers (FOIs) for any referenced Resources using FEvIR Linking Identifiers on initial loading or updating of Resources created by Conversion Tools (such as Computable Publishing®: GRADEpro-to-FEvIR Converter), to enable function support for multiple features throughout the FEvIR Platform.

          Computable Publishing®: Summary Of Findings Authoring Tool version 0.20.0 (March 29, 2024) creates and displays a SummaryOfFindings Profile of Composition Resource.

          Release 0.19.0 (March 29, 2024) relabels 'Generate Table Row Content' buttons to 'Regenerate Table Row' buttons, fixes a bug blocking editing a table cell after closing the modal window for editing a table cell, fixes a bug blocking adding a new table row after closing the modal window for adding a new table row, and adds supporting information denoting a section title is required when adding a new table row.

          Release 0.20.0 (March 29, 2024) adds a 'Regenerate Table Content' button to apply the 'Regenerate Table Row' button function to all table rows at once.

           

          Quote for thought: "If it's your job to eat a frog, it's best to do it first thing in the morning. And if it's your job to eat two frogs, it's best to eat the biggest one first." -- Mark Twain

          Joanne Dehnbostel

          unread,
          Apr 1, 2024, 9:26:14 AM4/1/24
          to Health Evidence Knowledge Accelerator (HEvKA), Health Evidence Knowledge Accelerator (HEvKA) Weekly Update

           

          18 people (BA, BK, CA-D, CE , CM, HL, IK, JB, JD, KR, KS, KW, MA, PW, RC, SL, SL , SM) from 10 countries (Belgium, Brazil, Canada, Chile/Spain, China, Finland, India, Peru, UK, USA) participated this week in up to 14 active working group meetings.

          30 people (AI, AN, BA, BK, CA-D, CE , CM, CW, GL, HK, HL, IK, JB, JD, JO, JW, KR, KS, KW, MA, MH, PR-S, PW, RC, RL, SL, SL , SM, SS, TD) from 14 countries (Belgium, Brazil, Canada, Chile/Spain, China, Finland, Germany, India, Norway, Peru, Poland, Taiwan, UK, USA) participated this month in up to 59 active working group meetings.

          Project Coordination Updates:

          On March 25, the Project Management Working Group corrected some issues with the Computable Publishing Net Effect Calculator. We also discussed methods for calculating absolute risk reduction from a meta-analysis with a confidence interval: https://www.bmj.com/content/381/bmj-2022-073141.

          On March 25, the Setting the Scientific Record on FHIR Working Group modified the example of a common data element (CDE) collective EvidenceVariable Resource to more accurately represent the answer choice values (EvidenceVariable.category.valueCodeableConcept) as used in the source CDE EvidenceVariable Resources.

          On March 27, the Communications Working Group worked to make minor revisions to "Making Science Computable: Introducing Evidence-Based Medicine on Fast Healthcare Interoperability Resources (EBMonFHIR)" which has been accepted for publication with the Journal of Medical Internet Research (JMIR). 

          On March 29, the Project Management Working Group prepared the agenda for the week of April 1-5, 2024.

          Day/Time (Eastern) | Working Group | Agenda Items
          Monday 8-9 am | Project Management | FHIR changes and EBMonFHIR Implementation Guide issues
          Monday 9-10 am | Setting the Scientific Record on FHIR | Create Examples of Common Data Elements and Comparison View
          Monday 10-11 am | CQL Development (a CDS EBMonFHIR sub-WG) | Review article on intermediate query format
          Monday 2-3 pm | Statistic Terminology | Review SEVCO terms (4 area under the curve terms open for vote)
          Tuesday 9-10 am | Measuring the Rate of Scientific Knowledge Transfer | Write Instructions for Evaluating Article Citations
          Tuesday 2-3 pm | StatisticsOnFHIR (a CDS EBMonFHIR sub-WG) | Create Examples of Common Data Elements and/or EBM IG Profile development
          Tuesday 3-4 pm | Ontology Management | Discuss protocol for translation of GRADE Ontology Terms, STATO status
          Wednesday 8-9 am | Funding the Ecosystem Infrastructure | Create product launch readiness checklist for making guidelines computable
          Wednesday 9-10 am | Communications (Awareness, Scholarly Publications) | Publications - GRADE Ontology concept paper, Presentations
          Thursday 8-9 am | EBM Implementation Guide (a CDS EBMonFHIR sub-WG) | Create Baseline Measure Report Profile of Composition Resource
          Thursday 9-10 am | Computable EBM Tools Development | Review progress with Summary of Findings authoring and viewing tools
          Friday 9-10 am | Risk of Bias Terminology | Review SEVCO terms (3 study selection bias terms open for vote)
          Friday 10-11 am | GRADE Ontology | Term development (1 Indirectness term open for vote)
          Friday 12-1 pm | Project Management | Prepare Weekly Agenda

           

          SEVCO Updates:

On March 25, the Statistic Terminology Working Group found that the two terms open for vote last week (c-statistic, partial area under the ROC curve) did not receive enough votes to be added to the code system; they remain open for vote. The group then drafted definitions for two additional area under the curve terms (area under the value-by-time curve, area under the precision-recall curve), which are now also open for vote. The group decided to remove three terms (measurement value, duration, and time to event) from the code system and began discussing the definition for absolute value; this discussion will continue next week. There are currently 4 statistic terms open for vote.

          Term

          Definition

          Alternative Term (if any)

          Comment for Application

          c-statistic

          Area under the ROC curve calculated with the full range of possible values for true positive rate and false positive rate.

Area under the ROC curve is defined as an area under the curve where the curve is the true positive rate and the range of interest is the false positive rate. ROC stands for Receiver Operating Characteristic. The area under the ROC curve is used to assess the performance of a classifier used to distinguish between two or more groups. Another term for true positive rate is sensitivity and another term for false positive rate is 1-specificity.

partial area under the ROC curve

Area under the ROC curve calculated with a specified portion of the full range of possible values for true positive rate and false positive rate.

Area under the ROC curve is defined as an area under the curve where the curve is the true positive rate and the range of interest is the false positive rate. ROC stands for Receiver Operating Characteristic. The area under the ROC curve is used to assess the performance of a classifier used to distinguish between two or more groups. Another term for true positive rate is sensitivity and another term for false positive rate is 1-specificity.

area under the value-by-time curve

An area under the curve where the curve is the repeated measures of a variable over time and the domain of interest is time.

          The area under the value-by-time curve is used for pharmacokinetics, pharmacodynamics, and physiological monitoring.

          area under the precision-recall curve

          An area under the curve where the curve is the precision and the domain of interest is the recall.

          • AUPRC
          • AUCPR
          • AUC-PR

          In information retrieval, recall is a synonym for sensitivity and precision is a synonym for positive predictive value.
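The area-under-the-curve statistics defined above all reduce to numerically integrating a curve over its domain of interest. A minimal trapezoidal-rule sketch (the function name and the sample points are invented for illustration; this is not part of the SEVCO code system):

```python
def auc_trapezoid(xs, ys):
    """Area under a curve given sample points, by the trapezoidal rule.

    For an ROC curve, xs are false positive rates and ys are true positive
    rates; for a value-by-time curve, xs are times and ys are measured values.
    """
    pts = sorted(zip(xs, ys))
    return sum((x1 - x0) * (y0 + y1) / 2
               for (x0, y0), (x1, y1) in zip(pts, pts[1:]))

# Illustrative ROC points for a perfect classifier
fpr = [0.0, 0.0, 1.0]
tpr = [0.0, 1.0, 1.0]
auc = auc_trapezoid(fpr, tpr)  # 1.0 for this perfect-classifier example
```

Restricting xs to a sub-interval before integrating would correspond to a partial area under the curve.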

On March 29, the Risk of Bias Terminology Working Group found that each of the 3 terms open for vote last week received a negative vote. Two of the terms, study eligibility not adhered to and error in study selection not minimized, were renamed as shown below, in response to feedback from the expert working group. The terms have been revised and are reopened for vote. If you have already voted on these terms, please vote again. There are currently 3 terms open for vote.

          Term

          Definition

          Alternative Terms
          (if any)

          Comment for Application
          (if any)

          publication bias

          A study selection bias in which the publicly available studies are not representative of all conducted studies.

          The term 'studies' means evidence or research results in any form where such studies would meet the study eligibility criteria without consideration of criteria regarding the form of publication. The phrase 'publicly available studies' means the studies are available to the broad academic community and the public through established distribution channels in any form, including forms with restricted access. Established distribution channels include peer-reviewed journals, books, conference proceedings, dissertations, reports by governmental or research organizations, preprints, and study registries.

misapplication of study eligibility criteria

A study selection bias due to incorrect implementation of the study inclusion and exclusion criteria.

          • nonadherence to study eligibility criteria

bias in study selection process

A study selection bias due to an inadequate process for screening and evaluating potentially eligible studies.

          An adequate process for screening and evaluating potentially eligible studies should generally include at least two independent reviewers for any steps that involve subjective judgment.

           

          HL7 Standards Development Updates:

          On March 25, the CQL Development Working Group (a CDS EBMonFHIR sub-WG) worked to peer review an article on intermediate query format after receiving a request from a journal editor. 

On March 28, the EBM Implementation Guide Working Group (a CDS EBMonFHIR sub-WG) discussed a question posed by an implementer who needs to document the source for the Evidence content, including a quote of the exact phrasing in the source material. We created Jira ticket https://jira.hl7.org/browse/FHIR-45094 to add a comment (and an example) to Evidence.relatedArtifact to explain how to express the source of extracted evidence.

          On March 26, the StatisticsOnFHIR Working Group (a CDS EBMonFHIR sub-WG) introduced new participants to the work we are doing to respond to a Request For Information (RFI) from NIH https://nexus.od.nih.gov/all/2024/02/28/seeking-ideas-on-using-common-data-elements-for-nih-supported-clinical-research/. This response will include examples of Common Data Elements. 

           

          Research Development Updates:

On March 26, the Board of the Scientific Knowledge Accelerator Foundation met. The main agenda items were strategies for fundraising, increasing public awareness of our mission, and answering a survey from Guidelines International Network.

           

          Knowledge Ecosystem Liaison/Coordination Updates:

          On March 26, the Ontology Management Working Group met again to discuss our protocol to translate GRADE terms from the GRADE Ontology project into other languages. We decided to limit our first efforts to translating the term itself and any alternative terms (synonyms) and worked on the layout of the spreadsheet we will use to collect the data. This discussion will continue in next week's Ontology Management meeting. 

          On March 27, the Funding the Ecosystem Infrastructure Working Group created a product launch readiness checklist for making guidelines computable which can be viewed here https://docs.google.com/document/d/1FcLL3Tkx87tpcmOmv-0tpKeWxI5VS8uS/edit

On March 29, the GRADE Ontology Working Group discussed a GRADE concept paper to be written in time for presentation at the GRADE Working Group Meeting in Miami, FL on May 1. The group then found that of the three terms open for vote last week, the parent term, Indirectness, received 13 positive votes, so it has been accepted into the code system. Indirectness in population received 11 votes with one negative vote and 4 comments; Indirectness in exposure received 11 votes with 3 negative votes and 5 comments. The group revised the term Indirectness in population, which has been reopened for vote; if you previously voted on this term, please vote again. The term Indirectness in exposure will be discussed and likely reopened for vote at next week's meeting. The meeting was recorded; please request a copy of the recording if interested.

          The group will continue our discussion regarding the translation of terms into additional languages in our Ontology Management Working Group on Tuesday from 3 to 4 pm Eastern time. 

          Term and results of voting

          Definition

          Alternative terms if any

          Comment for Application

          Indirectness

          Passed

          Differences between the characteristics of the evidence and the intended target application.

          Indirectness is one of the domains that can impact the rating of the certainty of evidence. Differences between the characteristics of the evidence and the intended target application might include setting, populations, exposures, interventions, comparators, or outcomes. The degree and importance of these differences will vary according to context, such as the key considerations for a guideline or drafted systematic review. The "intended target application" may also be referred to as the "question of interest".

          Related terms used by others for "directness" include "relevance", "external validity", and "applicability". Directness is the extent to which the results of an experimental or observational study apply to a target context outside of that study.

          The term Indirectness is not used to describe indirect comparisons created by chaining a series of direct comparisons, such as those used for generating indirect effect estimates in network meta-analyses.

          Indirectness in population

          Re-opened for vote

          Indirectness related to the population.

          Indirectness in population means there are differences between the populations in the relevant research studies and the population under consideration in a question of interest. Decisions regarding indirectness of patients or populations may depend on an understanding of whether biological or social factors are sufficiently different that one might expect substantial differences in the magnitude of effect. The question of interest may vary with context, such as the key considerations for a guideline or systematic review.

          Indirectness in exposure

          Received negative votes -vote closed pending discussion next week

          Indirectness related to the intervention(s) or exposure(s).

          • Indirectness in intervention

          Indirectness in exposure means there are differences between the interventions or exposures in the relevant research studies and the interventions or exposures under consideration in a question of interest.

          Decisions regarding indirectness of interventions or exposures may depend on an understanding of whether intensity, setting, or social factors are sufficiently different that one might expect substantial differences in the magnitude of effect. The question of interest may vary with context, such as the key considerations for a guideline or systematic review.

           

          FEvIR Platform and Tools Development Updates:

          Computable EBM Tools Development Working Group reviewed progress across Composition authoring and viewing tools and discussed priorities for future developments. 

           

          Releases on the FEvIR Platform:

          Computable Publishing®: Fast Evidence Interoperability Resources (FEvIR) Platform is available for use now, but is “pre-release”.  The current version is 0.210.0 (March 28, 2024). Viewing resources is open without login.  Signing in is free and required to create content (which can then only be edited by the person who created the content).

          Release 0.209.0 (March 25, 2024) displays a line break when a single \n is found in a markdown datatype, and displays a hyperlink when a canonical datatype value is found within a RelatedArtifact datatype.

Release 0.210.0 (March 28, 2024) identifies FEvIR Object Identifiers (FOIs) for any referenced Resources using FEvIR Linking Identifiers on initial loading or updating of Resources created by Conversion Tools (such as Computable Publishing®: GRADEpro-to-FEvIR Converter), to enable function support for multiple features throughout the FEvIR Platform.

          Computable Publishing®: Summary Of Findings Authoring Tool version 0.20.0 (March 29, 2024) creates and displays a SummaryOfFindings Profile of Composition Resource.

Release 0.18.0 (March 27, 2024) identifies FEvIR Object Identifiers (FOIs) for any referenced Resources using FEvIR Linking Identifiers, to enable function support for content creation (such as the Generate Table Row Content button to automatically generate section Narrative content from the referenced Resources) for Summary Of Findings Compositions created from conversion tools (such as Computable Publishing®: GRADEpro-to-FEvIR Converter).

          Release 0.19.0 (March 29, 2024) relabels 'Generate Table Row Content' buttons to 'Regenerate Table Row' buttons, fixes a bug blocking editing a table cell after closing the modal window for editing a table cell, fixes a bug blocking adding a new table row after closing the modal window for adding a new table row, and adds supporting information denoting a section title is required when adding a new table row.

          Release 0.20.0 (March 29, 2024) adds a 'Regenerate Table Content' button to apply the 'Regenerate Table Row' button function to all table rows at once.

          Computable Publishing®: ClinicalTrials.gov-to-FEvIR Converter version 4.11.3 (March 28, 2024)  converts ClinicalTrials.gov JSON to FEvIR Resources in FHIR JSON.

          Release 4.11.3 (March 28, 2024) corrects the ResearchStudy.result value from an object to an array.

Computable Publishing®: GRADEpro-to-FEvIR Converter version 0.3.1 (March 28, 2024) converts structured data from GRADEpro to FHIR JSON.

          Release 0.3.1 (March 28, 2024) adds an artifactReference element referencing the Composition Resource (Recommendation Profile) to the ArtifactAssessment Resource (RecommendationJustification Profile).

           

          Quotes for Thought:

          "I say luck is when an opportunity comes along and you're prepared for it." --- Denzel Washington

          "The more things change, the more they are the same." -- Alphonse Karr

          "Anyone who has never made a mistake has never tried anything new." --Albert Einstein

          "Give me a lever long enough and a fulcrum on which to place it, and I shall move the world." — Archimedes

"If it's your job to eat a frog, it's best to do it first thing in the morning. And if it's your job to eat two frogs, it's best to eat the biggest one first." -- Mark Twain

           

           

          To get involved or stay informed: HEvKA Project Page on FEvIR Platform, HEvKA Project Page on HL7 Confluence, or join any of the groups that are now meeting in the following weekly schedule:

          Joanne Dehnbostel

Apr 2, 2024, 1:44:47 PM
          to Health Evidence Knowledge Accelerator (HEvKA)

           

           

7 people (BA, CE, HL, JD, KS, W, MA) participated today in up to 4 active working group meetings.

          The Project Management Working Group worked on FHIR trackers including those already applied and brought the FHIR Tracker List up to date.

          The Setting the Scientific Record on FHIR Working Group worked to test and improve the Summary of Findings Authoring tool on the FEvIR Platform.

          The CQL Development Working Group (a CDS EBMonFHIR sub-WG) continued to peer review an article on intermediate query format after receiving a request from a journal editor.

          The group then started to create a list of Common Data Elements which will be kept in a Google folder before being added to a response to a Request for Information (RFI) from the NIH.

The Statistics Terminology Working Group found that the term area under the precision-recall curve received enough votes to be added to the code system. The group then decided that the term c-statistic should be absorbed into the term area under the ROC curve instead of being a separate term. Partial area under the ROC curve became a direct child of area under the curve, instead of area under the ROC curve, and area under the value-by-time curve changed alternative terms.

          The group then drafted the definition for the term, absolute value, and started to draft a definition for the term, quantile (they will continue the discussion of quantile in next week's meeting). There are currently 4 statistic terms open for vote.

          Term

          Definition

          Alternative Term(s) if any

          Comment For Application if any

          absolute value

          A statistic that represents the distance of a value from zero.

          The | symbol is used around the value to denote the absolute value, e.g. |x|, such that if x = -3, then |x| = 3.

area under the ROC curve

An area under the curve where the curve is the true positive rate and the range of interest is the false positive rate.

            • AUC
            • AUROC
            • area under the receiver operating characteristic curve
            • c-statistic
            • C-statistic
            • Harrell's C
            • concordance index
            • concordance statistic

ROC stands for Receiver Operating Characteristic. The area under the ROC curve is used to assess the performance of a classifier used to distinguish between two or more groups. Another term for true positive rate is sensitivity and another term for false positive rate is 1-specificity.

            The c-statistic is the area under the ROC curve calculated with the full range of possible values for true positive rate and false positive rate. Another interpretation of the c-statistic is similar without explicitly referencing the ROC curve: "The C statistic is the probability that, given 2 individuals (one who experiences the outcome of interest and the other who does not or who experiences it later), the model will yield a higher risk for the first patient than for the second. It is a measure of concordance (hence, the name “C statistic”) between model-based risk estimates and observed events. C statistics measure the ability of a model to rank patients from high to low risk but do not assess the ability of a model to assign accurate probabilities of an event occurring (that is measured by the model’s calibration). C statistics generally range from 0.5 (random concordance) to 1 (perfect concordance)." (JAMA. 2015;314(10):1063-1064. doi:10.1001/jama.2015.11082)

partial area under the ROC curve

An area under the curve where the curve is the true positive rate and the range of interest is a specified portion of the range of possible values for the false positive rate.

Area under the ROC curve is defined as an area under the curve where the curve is the true positive rate and the range of interest is the false positive rate. ROC stands for Receiver Operating Characteristic. The area under the ROC curve is used to assess the performance of a classifier used to distinguish between two or more groups. Another term for true positive rate is sensitivity and another term for false positive rate is 1-specificity.

            area under the value-by-time curve

            An area under the curve where the curve is the repeated measures of a variable over time and the domain of interest is time.

            • area under the value-time curve
            • area under the value vs. time curve
            • area under the value versus time curve

            The area under the value-by-time curve is used for pharmacokinetics, pharmacodynamics, and physiological monitoring.
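The concordance interpretation of the c-statistic quoted in the table above (JAMA 2015) can also be computed directly from pairwise comparisons, without reference to the ROC curve. A minimal sketch; the function name, scores, and event labels are invented for illustration:

```python
def c_statistic(scores, events):
    """Probability that a randomly chosen subject with the event has a
    higher model score than a randomly chosen subject without it
    (ties count as half)."""
    pos = [s for s, e in zip(scores, events) if e]      # scores of subjects with the event
    neg = [s for s, e in zip(scores, events) if not e]  # scores of subjects without the event
    pairs = [(p, n) for p in pos for n in neg]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs)
    return wins / len(pairs)

# Invented risk scores: a higher score should indicate higher risk
scores = [0.9, 0.8, 0.4, 0.3]
events = [True, False, True, False]
c = c_statistic(scores, events)  # 0.75: 3 of 4 event/non-event pairs are correctly ranked
```

For continuous scores without ties, this pairwise concordance equals the trapezoidal area under the ROC curve.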

             

The HL7 Learning Health Systems (LHS) Working Group discussed the upcoming Evidence Based Medicine Track at the May 2024 HL7 Connectathon (2024 - 05 Evidence Based Medicine).

            Releases on the FEvIR Platform:

Computable Publishing®: Summary Of Findings Authoring Tool version 0.21.0 (April 1, 2024) creates and displays a SummaryOfFindings Profile of Composition Resource (SoF Authoring Tool).

            Release 0.21.0 (April 1, 2024) removes ability to change the Relative Importance ratings from the Table View to ensure consistent processing with changing the ratings using the Rate Relative Importance button.

Quote for thought: "I have great faith in fools — self-confidence, my friends call it." --Edgar Allan Poe

            Joanne Dehnbostel

Apr 3, 2024, 12:13:35 PM
            to Health Evidence Knowledge Accelerator (HEvKA)

             

             

            9 people (BA, HL, JD, KR, KS, KW, MA, RC, SM) participated today in up to 3 active working group meetings.

The Measuring the Rate of Scientific Knowledge Transfer Working Group wrote instructions for evaluating article citations to determine whether the citing article is systematically derived, whether it is intended to guide clinical practice, and whether the data are incorporated into the results of the citing paper. These instructions will guide data entry for our project.

            The StatisticsOnFHIR Working Group (a CDS EBMonFHIR sub-WG) reviewed the Computable Publishing®: Summary Of Findings Authoring Tool  and discussed best practices for the statistical calculations. 

The Ontology Management Working Group continued to discuss the protocol for translation of GRADE Ontology Terms. Each language will have a named coordinator who will administer the data entry for that language. We have volunteers to coordinate French, Spanish, and Portuguese so far. We now have a working spreadsheet and will be writing instructions for data entry into that spreadsheet to ensure that data entry is consistent between languages. This discussion will continue next week.

            Quote for Thought: "It is human nature to think wisely and act foolishly."- Anatole France

            Joanne Dehnbostel

Apr 4, 2024, 7:54:42 AM
            to Health Evidence Knowledge Accelerator (HEvKA)

             

             

7 people (BA, CE, JD, KR, KS, KW, MA) participated today in up to 2 active working group meetings.

            The Funding the Ecosystem Infrastructure working group continued to create a product launch readiness checklist for making guidelines computable which can be viewed here

The Communications Working Group made last-minute changes to "Making Science Computable: Introducing Evidence-Based Medicine on Fast Healthcare Interoperability Resources (EBMonFHIR)", which has been accepted for publication in the Journal of Medical Internet Research (JMIR). The group then turned its attention to the GRADE concept paper, which is being written by the GRADE Ontology Working Group.

            Quote for Thought: "We’re fools whether we dance or not, so we might as well dance."- Japanese Proverb

            Joanne Dehnbostel

Apr 8, 2024, 11:06:17 AM
            to Health Evidence Knowledge Accelerator (HEvKA), Health Evidence Knowledge Accelerator (HEvKA) Weekly Update

             

             

            Project Coordination Updates:

16 people (BA, CA-D, CE, HK, HL, IK, JB, JD, KR, KS, KW, MA, PW, RC, SM, TD) from 8 countries (Belgium, Canada, Chile/Spain, Finland, Norway, Peru, UK, USA) participated this week in up to 14 active working group meetings.

             

            Announcement:

            A new commentary by Brian Alper has been published in the Guidelines International Network Journal, Clinical and Public Health Guidelines. The commentary can be viewed here. A FHIR Citation Resource for the commentary can be viewed here.

            On April 1, the Project Management Working Group worked on FHIR trackers including those already applied and brought the FHIR Tracker List up to date.

On April 3, the Communications Working Group made last-minute changes to "Making Science Computable: Introducing Evidence-Based Medicine on Fast Healthcare Interoperability Resources (EBMonFHIR)", which has been accepted for publication in the Journal of Medical Internet Research (JMIR). The group then turned its attention to the GRADE concept paper, which is being written by the GRADE Ontology Working Group.

            On April 5, the Project Management Group prepared the weekly agenda for next week.

            Day/Time (Eastern)

            Working Group

            Agenda Items

            Monday 8-9 am

            Project Management

            FHIR changes and EBMonFHIR Implementation Guide issues

            Monday 9-10 am

            Setting the Scientific Record on FHIR

            Create RFI Response for Common Data Elements

            Monday 10-11 am

            CQL Development (a CDS EBMonFHIR sub-WG)

            Evaluate CCDL as intermediate query format

            Monday 2-3 pm

            Statistic Terminology

            Review SEVCO terms (4 terms open for vote)

            Tuesday 9 am-10 am

            Measuring the Rate of Scientific Knowledge Transfer

            Review Instructions for Evaluating Article Citations

            Tuesday 2-3 pm

            StatisticsOnFHIR (a CDS EBMonFHIR sub-WG)

            Create RFI Response for Common Data Elements and/or EBM IG Profile development or Net effect calculations

            Tuesday 3-4 pm

            Ontology Management

            Advance protocol for translation of GRADE Ontology Terms

            Wednesday 8-9 am

            Funding the Ecosystem Infrastructure

            Discuss product launch readiness checklist for making guidelines computable - Identify developers

            Wednesday 9-10 am

            Communications (Awareness, Scholarly Publications)

            Publications -GRADE Ontology concept paper

            Thursday 8-9 am

            EBM Implementation Guide (a CDS EBMonFHIR sub-WG)

            Create Participant Flow Report Profile of Composition Resource

            Thursday 9-10 am

            Computable EBM Tools Development

            Review progress with Converter tools

            Friday 9-10 am

            Risk of Bias Terminology

            Review SEVCO terms (3 study selection bias terms open for vote)

            Friday 10-11 am

            GRADE Ontology

            Term development (2 Indirectness terms open for vote)

            Friday 12-1 pm

            Project Management

            Prepare Weekly Agenda

             

             

            SEVCO Updates:

On April 1, the Statistics Terminology Working Group found that the term area under the precision-recall curve received enough votes to be added to the code system. The group then decided that the term c-statistic should be absorbed into the term area under the ROC curve instead of being a separate term. Partial area under the ROC curve became a direct child of area under the curve, instead of area under the ROC curve, and area under the value-by-time curve changed alternative terms.

            The group then drafted the definition for the term, absolute value, and started to draft a definition for the term, quantile (they will continue the discussion of quantile in next week's meeting). There are currently 4 statistic terms open for vote.

            Term

            Definition

            Alternative Term(s) if any

            Comment For Application if any

            absolute value

            A statistic that represents the distance of a value from zero.

            The | symbol is used around the value to denote the absolute value, e.g. |x|, such that if x = -3, then |x| = 3.

            area under the ROC curve

            An area under the curve where the curve is the true positive rate and the range of interest is the false positive rate.

              • AUC
              • AUROC
              • area under the receiver operating characteristic curve
              • c-statistic
              • C-statistic
              • Harrell's C
              • concordance index
              • concordance statistic

ROC stands for Receiver Operating Characteristic. The area under the ROC curve is used to assess the performance of a classifier used to distinguish between two or more groups. Another term for true positive rate is sensitivity and another term for false positive rate is 1-specificity.


                The c-statistic is the area under the ROC curve calculated with the full range of possible values for true positive rate and false positive rate. Another interpretation of the c-statistic is similar without explicitly referencing the ROC curve: "The C statistic is the probability that, given 2 individuals (one who experiences the outcome of interest and the other who does not or who experiences it later), the model will yield a higher risk for the first patient than for the second. It is a measure of concordance (hence, the name “C statistic”) between model-based risk estimates and observed events. C statistics measure the ability of a model to rank patients from high to low risk but do not assess the ability of a model to assign accurate probabilities of an event occurring (that is measured by the model’s calibration). C statistics generally range from 0.5 (random concordance) to 1 (perfect concordance)." (JAMA. 2015;314(10):1063-1064. doi:10.1001/jama.2015.11082)

                partial area under the ROC curve

                An area under the curve where the curve is the true positive rate and the range of interest is a specified portion of the range of possible values for the false positive rate.

Area under the ROC curve is defined as an area under the curve where the curve is the true positive rate and the range of interest is the false positive rate. ROC stands for Receiver Operating Characteristic. The area under the ROC curve is used to assess the performance of a classifier used to distinguish between two or more groups. Another term for true positive rate is sensitivity and another term for false positive rate is 1-specificity.

                area under the value-by-time curve

                An area under the curve where the curve is the repeated measures of a variable over time and the domain of interest is time.

                • area under the value-time curve
                • area under the value vs. time curve
                • area under the value versus time curve

                  The area under the value-by-time curve is used for pharmacokinetics, pharmacodynamics, and physiological monitoring.

                  On April 5, the Risk of Bias Terminology Working Group found that none of the 3 terms open for vote last week received enough votes to be added to the code system. Two of these terms (publication bias and bias in study selection process) are unchanged and remain open for vote. If you have already voted on these two terms you do not need to vote again.

                  One of these terms (misapplication of study eligibility criteria) was discussed, the definition was revised, and it was re-opened for vote. Please vote again on this term. There are currently 3 risk of bias terms open for vote.  

                  Term

                  Definition

                  Alternative Terms
                  (if any)

                  Comment for application
                  (if any)

                  publication bias

                  A study selection bias in which the publicly available studies are not representative of all conducted studies.

                  The term 'studies' means evidence or research results in any form where such studies would meet the study eligibility criteria without consideration of criteria regarding the form of publication. The phrase 'publicly available studies' means the studies are available to the broad academic community and the public through established distribution channels in any form, including forms with restricted access. Established distribution channels include peer-reviewed journals, books, conference proceedings, dissertations, reports by governmental or research organizations, preprints, and study registries.

                  misapplication of study eligibility criteria

                  A study selection bias due to inappropriate implementation of the study inclusion and exclusion criteria.

                  • nonadherence to study eligibility criteria

                  bias in study selection process

                  A study selection bias due to an inadequate process for screening and evaluating potentially eligible studies.

                  An adequate process for screening and evaluating potentially eligible studies should generally include at least two independent reviewers for any steps that involve subjective judgment.

                  To participate you can join the Scientific Evidence Code System (SEVCO) Expert Working Group at https://fevir.net/resources/Project/27845.

                   

                  HL7 Standards Development Updates:

                  On April 1, the CQL Development Working Group (a CDS EBMonFHIR sub-WG) continued to peer review an article on intermediate query format after receiving a request from a journal editor.

                  The group then started to create a list of Common Data Elements which will be kept in a Google folder before being added to a response to a Request for Information (RFI) from the NIH.

                  On April 4, the EBM Implementation Guide Working Group (a CDS EBMonFHIR sub-WG) created a Baseline Measure Report Profile which was added to the EBMonFHIR Implementation guide.

                   

                  On April 1, the HL7 Learning Health Systems (LHS) Working Group discussed the upcoming Evidence Based Medicine Track at the May 2024 HL7 Connectathon (2024-05 Evidence Based Medicine).

                   

                  On April 2, the StatisticsOnFHIR Working Group (a CDS EBMonFHIR sub-WG) reviewed the Computable Publishing®: Summary Of Findings Authoring Tool  and discussed best practices for the statistical calculations. 

                   

                   

                  Research Development Updates:

                  On April 2, the Measuring the Rate of Scientific Knowledge Transfer Working Group wrote instructions for evaluating article citations to determine whether the citing article is systematically derived, intended to guide clinical practice, and whether the data is incorporated into the results of the citing paper. These instructions will guide data entry for our project.

                   

                  Knowledge Ecosystem Liaison/Coordination Updates:

                  On April 2, the Ontology Management Working Group continued to discuss the protocol for translation of GRADE Ontology terms. Each language will have a named coordinator who will administer the data entry for that language. We have volunteers to coordinate French, Spanish, and Portuguese so far. We now have a working spreadsheet and will be writing instructions for data entry into that spreadsheet to ensure that entries are consistent across languages. This discussion will continue next week.

                  On April 3, the Funding the Ecosystem Infrastructure Working Group continued to create a product launch readiness checklist for making guidelines computable, which can be viewed here.

                  On April 5, the GRADE Ontology Working Group found that the one term open for vote last week (Indirectness in population) received 8 affirmative votes and one negative vote. The group revised the comment for application and re-opened the term for vote. A second term that had been open for vote the previous week (Indirectness in exposure), which received 11 votes (3 negative) and 5 comments, was also re-opened for vote after revision. There are currently two terms open for vote, shown in the table below. Today’s meeting was recorded and can be viewed here.

                  The group has also created a draft manuscript of the GRADE Ontology concept paper for approval at the GRADE Working Group Meeting in Miami, May 1, 2024. This manuscript has been sent to the GRADE Ontology working group for editing.

                  Note: This Tuesday’s HEvKA Ontology (3-4 PM Eastern Time) meeting will include discussion and development of a protocol to create and add translations of the GRADE terms to represent multiple languages. We want this process to be transparent and consistent, so a protocol is needed. Please join us if you can.  Comments via email are also welcome.

                   

                  Term

                  Definition

                  Alternative Terms
                  (if any)

                  Comment for application
                  (if any)

                  Indirectness in population

                  Differences between the population-related characteristics of the evidence and the intended target application.

                  Indirectness in population means there are differences between the populations in the relevant research studies and the population under consideration in a question of interest.

                  Differences in population-related characteristics (such as disease severity, social factors, or other determinants of health) may result in differences in the estimates derived from the evidence and those that would be seen in the intended target application.



                  The question of interest may vary with context, such as the key considerations for a guideline or systematic review.

                  Indirectness in exposure

                  Differences between the exposure-related characteristics of the evidence and the intended target application.

                  • Indirectness in intervention

                  Indirectness in exposure means there are differences between the interventions or exposures in the relevant research studies and the interventions or exposures under consideration in a question of interest.

                  Differences in exposure-related characteristics (such as route of administration, intensity, setting, or timing) may result in differences in the estimates derived from the evidence and those that would be seen in the intended target application.



                  The question of interest may vary with context, such as the key considerations for a guideline or systematic review.

                  Please visit the term pages via the links in the table above and click the Comment button if you would like to share comments that will be openly viewed by anyone visiting the page. You may also click the Vote button to anonymously register your agreement or disagreement with a term. If you vote ‘No’, you need to add a comment (submitted with your vote, not publicly shared with your name) explaining what change is needed to reach agreement.

                  To make voting on multiple terms easier, you can use My Ballot at https://fevir.net/myballot. There you can mark any term with Yes if you approve, or with No (adding a comment to express what change is needed), or you can navigate to each term page to see additional detail.

                   

                  Additional Opportunity for Participation:

                  If you are interested in risk of bias terminology, there is another project within HEvKA that may interest you: the SEVCO Risk of Bias Terminology Working Group. You can join this project here; just press the button that says "Join the Group". The two projects will then both show up on your "My Ballot" screen, but they appear in separate tables so it will be easy to differentiate between projects. The Risk of Bias Terminology Working Group meets during the hour just before the GRADE Ontology Working Group, Fridays at 9 AM Eastern time (highlighted in blue on the schedule below).

                   

                  FEvIR Platform and Tools Development Updates:

                  On April 1, the Setting the Scientific Record on FHIR Working Group worked to test and improve the Summary of Findings Authoring tool on the FEvIR Platform.

                  On April 4, the Computable EBM Tools Development Working Group reviewed and improved the Summary Of Findings Authoring Tool on the FEvIR Platform. 

                   

                   

                  Releases on the FEvIR Platform:

                  Computable Publishing®: Fast Evidence Interoperability Resources (FEvIR) Platform is available for use now, but is “pre-release”.  The current version is 0.211.0 (April 5, 2024). Viewing resources is open without login.  Signing in is free and required to create content (which can then only be edited by the person who created the content).

                  Release 0.211.0 (April 5, 2024) uses 0 instead of 1 for the initial index number entry in the display of array values in the JSON Outline of the Text View.

                  Computable Publishing®: Summary Of Findings Authoring Tool version 0.21.0 (April 1, 2024) creates and displays a SummaryOfFindings Profile of Composition Resource (SoF Authoring Tool).

                  Release 0.21.0 (April 1, 2024) removes the ability to change the Relative Importance ratings from the Table View, to ensure consistent processing with changing the ratings using the Rate Relative Importance button.

                  Release 0.22.0 displays the 'Generate Net Effect Report' button without dependency on  having relative importance ratings, changes the display of the 'Sample size' column header, reports "[Not calculable." if unable to obtain a numerical result for a net effect contribution value, and generates a Narrative summary (table cell display value) for all statistics found for the Effect Estimate (instead of using only the first statistic entry).

                  Computable Publishing®: ClinicalTrials.gov-to-FEvIR Converter version 4.11.3 (April 4, 2024)  converts ClinicalTrials.gov JSON to FEvIR Resources in FHIR JSON.

                  Release 4.12.0 (April 4, 2024) recognizes links to large documents for study protocols, statistical analysis plans, and informed consent forms and adds them as relatedArtifact entries to the ResearchStudy Resource.

                   

                   

                  Quotes for thought:

                  "I have great faith in fools — self-confidence, my friends call it." -- Edgar Allan Poe

                  "It is human nature to think wisely and act foolishly." -- Anatole France

                  "We’re fools whether we dance or not, so we might as well dance." -- Japanese Proverb

                  "A fool flatters himself, a wise man flatters the fool." -- Edward G. Bulwer-Lytton, English Writer

                  "Fool me once, strike one. Fool me twice, strike three." -- Michael Scott, "The Office"

                  Joanne Dehnbostel

                  Apr 15, 2024, 10:53:58 AM
                  to Health Evidence Knowledge Accelerator (HEvKA), Health Evidence Knowledge Accelerator (HEvKA) Weekly Update

                   

                   

                  Project Coordination Updates:

                  19 people (BA, CA-D, CE, HK, HL, IK, JB, JD, KR, KS, KW, MA, PW, RC, SL, SM, TD, XC, XZ) from 9 countries (Belgium, Canada, Chile/Spain, China, Finland, Norway, Peru, UK, USA) participated this week in up to 14 active working group meetings.

                  On Monday, April 8, the Project Management Working Group discussed the following FHIR trackers:

                  https://jira.hl7.org/browse/FHIR-45094

                  https://jira.hl7.org/browse/FHIR-41502

                  https://jira.hl7.org/browse/FHIR-42885

                  On Monday, April 8, the Setting the Scientific Record on FHIR Working Group continued to discuss the RFI Response to the NIH for Common Data Elements, created a new EvidenceVariable example on the FEvIR Platform (https://fevir.net/resources/EvidenceVariable/208211), and improved the PDF for collective view.

                  On Wednesday, April 10, the Communications Working Group continued to discuss the need for a contributor extension that could be applied to any resource. The Evidence Based Medicine Implementation Guide (EBM IG) would handle this contributorship.

                  There is currently no code for data entry in the CRediT taxonomy -- see https://jats4r.org/credit-taxonomy.

                  https://build.fhir.org/valueset-artifact-contribution-type.html

                  The group decided that the following two new artifact contribution types should be added to the above artifact contribution types:

                  1) computable formatting

                  2) data entry

                   

                  On Friday, April 12, the Project Management Working Group created the suggested agenda for next week:

                  Day/Time (Eastern)

                  Working Group

                  Agenda Items

                  Monday 8-9 am

                  Project Management

                  FHIR changes and EBMonFHIR Implementation Guide issues

                  Monday 9-10 am

                  Setting the Scientific Record on FHIR

                  SRDR to FEvIR Review

                  Monday 10-11 am

                  CQL Development (a CDS EBMonFHIR sub-WG)

                  Evaluate the focus for this working group

                  Monday 2-3 pm

                  Statistic Terminology

                  Review SEVCO terms (5 terms open for vote)

                  Tuesday 9 am-10 am

                  Measuring the Rate of Scientific Knowledge Transfer

                  Review Initial Pilot Progress

                  Tuesday 2-3 pm

                  StatisticsOnFHIR (a CDS EBMonFHIR sub-WG)

                  Create RFI Response for Common Data Elements

                  Tuesday 3-4 pm

                  Ontology Management

                  Review Objectives and Progress

                  Wednesday 8-9 am

                  Funding the Ecosystem Infrastructure

                  Review progress on product launch readiness checklist for making guidelines computable

                  Wednesday 9-10 am

                  Communications (Awareness, Scholarly Publications)

                  GRADE Ontology concept paper, Study Design Paper, RFI for Common Data Elements

                  Thursday 8-9 am

                  EBM Implementation Guide (a CDS EBMonFHIR sub-WG)

                  GIN Tech Meeting

                  Thursday 9-10 am

                  Computable EBM Tools Development

                  Review progress with Converter tools

                  Friday 9-10 am

                  Risk of Bias Terminology

                  Review SEVCO terms (3 study selection bias terms open for vote)

                  Friday 10-11 am

                  GRADE Ontology

                  Term development (2 Indirectness terms open for vote)

                  Friday 12-1 pm

                  Project Management

                  Prepare Weekly Agenda

                   

                  SEVCO Updates:

                  On Monday April 8, the Statistic Terminology Working Group found that none of the 4 terms that were open for vote received enough votes to be accepted into the code system. The group worked to define one additional term, quantile. There are currently 5 terms open for vote. 

                  Term

                  Definition

                  Alternative Terms
                  (if any)

                  Comment for application
                  (if any)

                  absolute value

                  A statistic that represents the distance of a value from zero.

                  The | symbol is used around the value to denote the absolute value, e.g. |x|, such that if x = -3, then |x| = 3.

                  quantile

                  A statistic that represents the value for which the number of data points at or below it constitutes a specific portion of the total number of data points.

                  Quantile is a type of statistic but is not used without specification of the portion it represents. For example, one may report a fortieth percentile (40%ile), but one does not report a percentile without specifying which percentile.
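As an illustration of the definition above (not part of the SEVCO term itself), here is a minimal Python sketch of one common empirical quantile rule. The function name and data are hypothetical, and other conventions (e.g., interpolating between neighboring data points) also exist.

```python
# Illustrative sketch of one empirical quantile convention; other rules
# (e.g., interpolation between neighboring data points) also exist.
import math

def quantile(data, portion):
    """Smallest data value v such that the number of data points at or
    below v is at least portion * n, for 0 < portion <= 1."""
    ordered = sorted(data)
    k = math.ceil(portion * len(ordered))  # points required at or below v
    return ordered[k - 1]

data = [2, 7, 1, 9, 4, 6, 3, 8, 5, 10]
print(quantile(data, 0.40))  # fortieth percentile (40%ile) -> 4
```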

                  area under the ROC curve

                  An area under the curve where the curve is the true positive rate and the range of interest is the false positive rate.

                  • AUC
                  • AUROC
                  • area under the receiver operating characteristic curve
                  • c-statistic
                  • C-statistic
                  • Harrell's C
                  • concordance index
                  • concordance statistic

                  ROC stands for Receiver Operating Characteristic. The area under the ROC curve is used to assess the performance of a classifier used to distinguish between two or more groups. Another term for true positive rate is sensitivity and another term for false positive rate is 1-specificity.

                  The c-statistic is the area under the ROC curve calculated with the full range of possible values for true positive rate and false positive rate. Another interpretation of the c-statistic is similar without explicitly referencing the ROC curve: "The C statistic is the probability that, given 2 individuals (one who experiences the outcome of interest and the other who does not or who experiences it later), the model will yield a higher risk for the first patient than for the second. It is a measure of concordance (hence, the name “C statistic”) between model-based risk estimates and observed events. C statistics measure the ability of a model to rank patients from high to low risk but do not assess the ability of a model to assign accurate probabilities of an event occurring (that is measured by the model’s calibration). C statistics generally range from 0.5 (random concordance) to 1 (perfect concordance)." (JAMA. 2015;314(10):1063-1064. doi:10.1001/jama.2015.11082)
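The area itself can be estimated from a set of (false positive rate, true positive rate) points with the trapezoidal rule. The sketch below is illustrative only: the function name and points are hypothetical, not a SEVCO artifact.

```python
# Illustrative sketch only; the (fpr, tpr) points are hypothetical and
# should span the full range from (0, 0) to (1, 1).
def roc_auc(points):
    """Trapezoidal-rule area under the true positive rate as a
    function of the false positive rate."""
    pts = sorted(points)
    return sum((x2 - x1) * (y1 + y2) / 2.0
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

points = [(0.0, 0.0), (0.1, 0.6), (0.3, 0.8), (0.6, 0.95), (1.0, 1.0)]
print(roc_auc(points))  # about 0.8225
```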

                  partial area under the ROC curve

                  An area under the curve where the curve is the true positive rate and the range of interest is a specified portion of the range of possible values for the false positive rate.

                  Area under the ROC curve is defined as an area under the curve where the curve is the true positive rate and the range of interest is the false positive rate. ROC stands for Receiver Operating Characteristic. The area under the ROC curve is used to assess the performance of a classifier used to distinguish between two or more groups. Another term for true positive rate is sensitivity and another term for false positive rate is 1-specificity.
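A sketch of the restricted-range calculation follows. The (fpr, tpr) points are hypothetical, and the boundary handling shown (linear interpolation at the range endpoints) is one common choice, not a prescribed method.

```python
# Illustrative sketch only; the (fpr, tpr) points are hypothetical and
# must bracket the requested false positive rate range.
def partial_roc_auc(points, fpr_lo, fpr_hi):
    """Area under the true positive rate over [fpr_lo, fpr_hi], with linear
    interpolation at the range boundaries and trapezoids in between."""
    pts = sorted(points)

    def tpr_at(x):
        # Linearly interpolate the true positive rate at false positive rate x.
        for (x1, y1), (x2, y2) in zip(pts, pts[1:]):
            if x1 <= x <= x2:
                return y1 if x2 == x1 else y1 + (y2 - y1) * (x - x1) / (x2 - x1)
        raise ValueError("false positive rate outside curve range")

    xs = [fpr_lo] + [x for x, _ in pts if fpr_lo < x < fpr_hi] + [fpr_hi]
    ys = [tpr_at(x) for x in xs]
    return sum((b - a) * (ya + yb) / 2.0
               for a, b, ya, yb in zip(xs, xs[1:], ys, ys[1:]))

points = [(0.0, 0.0), (0.1, 0.6), (0.3, 0.8), (0.6, 0.95), (1.0, 1.0)]
print(partial_roc_auc(points, 0.0, 0.3))  # about 0.17
```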

                  area under the value-by-time curve

                  An area under the curve where the curve is the repeated measures of a variable over time and the domain of interest is time.

                  • area under the value-time curve
                  • area under the value vs. time curve
                  • area under the value versus time curve

                  The area under the value-by-time curve is used for pharmacokinetics, pharmacodynamics, and physiological monitoring.
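For a pharmacokinetic flavor of the same idea, here is a sketch using the linear trapezoidal rule on made-up concentration-time data. The linear rule is one common choice; log-linear variants are also used in pharmacokinetics.

```python
# Illustrative sketch with made-up data; the linear trapezoidal rule is
# one common choice (log-linear variants are also used in pharmacokinetics).
def auc_time(times, values):
    """Trapezoidal-rule area under repeated measures of a variable over time."""
    return sum((t2 - t1) * (v1 + v2) / 2.0
               for t1, t2, v1, v2 in zip(times, times[1:], values, values[1:]))

times = [0, 1, 2, 4, 8]            # hours after dose (hypothetical)
conc = [0.0, 12.0, 9.0, 5.0, 1.0]  # drug concentration in mg/L (hypothetical)
print(auc_time(times, conc))       # 42.5 (mg·h/L)
```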

                   

                  On Friday April 12, the Risk of Bias Terminology Working Group reviewed the three terms open for voting last week. The term "misapplication of study eligibility criteria" passed and was added to the code system. The two remaining terms (publication bias, bias in study selection process) were revised and reopened for voting. An additional term which was already added to the code system (study selection bias) has a new alternative term and has been reopened for voting. There are currently three risk of bias terms on the ballot. If you have voted on these terms before, please vote again as they have been revised.

                  Term and Voting Status

                  Definition

                  Alternative Terms
                  (if any)

                  Comment for application
                  (if any)

                  misapplication of study eligibility criteria-passed, no longer open for vote

                  A study selection bias due to inappropriate implementation of the study inclusion and exclusion criteria.

                  • nonadherence to study eligibility criteria

                  study selection bias-added alternative term and reopened for vote

                  A selection bias resulting from factors that influence study selection, from methods used to include or exclude studies for evidence synthesis, or from differences between the study sample and the population of interest.

                  • bias in study selection

                  publication bias-revised and reopened for vote

                  A study selection bias in which the publicly available studies are not representative of all conducted studies.

                  Publication bias arises from the failure to identify all studies that have been conducted, either published (i.e., publicly available) or unpublished. The term 'studies' means evidence or research results in any form where such studies would meet the study eligibility criteria without consideration of criteria regarding the form of publication. The phrase 'publicly available studies' means the studies are available to the broad academic community and the public through established distribution channels in any form, including forms with restricted access. Established distribution channels include peer-reviewed journals, books, conference proceedings, dissertations, reports by governmental or research organizations, preprints, and study registries.

                  Publication bias often leads to an overestimate of the effect in favor of the study hypothesis, because studies with statistically significant positive results are more likely to be publicly available.

                  bias in study selection process-revised and reopened for vote

                  A study selection bias due to an inadequate process for screening and/or evaluating potentially eligible studies.

                  An adequate process for screening and evaluating potentially eligible studies should generally include at least two independent reviewers for any steps that involve subjective judgment. Any step involving subjective judgment may introduce systematic distortions into the research findings.

                   

                  HL7 Standards Development Updates:

                  On Monday, April 8, the CQL Development Working Group (a CDS EBMonFHIR sub-WG) discussed Clinical Cohort Definition Language (CCDL) as an intermediate query format (an intermediary between FHIR and CQL expressions) and sent an email to the group in Germany that works on CCDL to see if they would like to connect.

                  On Tuesday, April 9, the StatisticsOnFHIR Working Group updated the BaselineMeasureReport Profile on the FEvIR Platform based on the changed FHIR specification. There were changes to the description, the categories, and the section starter. The Viewing Tool was also updated. These changes have not yet been published.

                   

                  On Tuesday, April 9, in the HL7 Biomedical Research and Regulation (BRR) Working Group meeting, discussion of the participant flow profile for the EBM IG resulted in the suggestion "to NOT use 'Consort' in the Profile names as these profiles will be used for retrospective studies and other studies for which CONSORT does not apply." The full Jira ticket can be viewed here: https://jira.hl7.org/browse/FHIR-44740.

                   

                   

                  On Thursday April 11, the EBM Implementation Guide Working Group (a CDS EBMonFHIR sub-WG) worked on:

                  Baseline Measure Report

                  Wrote descriptions for the BaselineMeasureReport Profile of Composition Resource. These changes can be viewed using the following links: https://build.fhir.org/ig/HL7/ebm/composition.html and https://build.fhir.org/ig/HL7/ebm/StructureDefinition-baseline-measure-report.html.

                  Participant Flow Report

                  Renamed 2 Participant Flow profiles for clarity:

                  Profile of EvidenceVariable Resource renamed from ParticipantFlow to ParticipantFlowEvidenceVariable Profile

                  Profile of Evidence Resource renamed from ParticipantFlowMeasure to ParticipantFlowEvidence Profile

                  Created new profile:

                  Profile of Composition Resource named ParticipantFlowReport Profile of Composition Resource.

                   

                   

                   

                  Research Development Updates:

                   

                  On Tuesday April 9, the Measuring the Rate of Scientific Knowledge Transfer Working Group reviewed instructions for evaluating article citations. These instructions can be viewed under the heading “Investigation Description” on the project page here: https://fevir.net/radar/3

                   

                  Knowledge Ecosystem Liaison/Coordination Updates:

                   

                  On Tuesday, April 9, the Ontology Management Working Group improved the GRADE code system translation spreadsheet, including general instructions, contact information, and the addition of the following columns: Translator 2 Name, Translator 2 Agreement, and language code. The order of the columns was changed so the spreadsheet can be filled in from left to right. The group then reviewed the French translation of the GRADE certainty terms and set up the Spanish tab of the spreadsheet. The resulting spreadsheet can be viewed here: https://docs.google.com/spreadsheets/d/14wHnXAPkJmosVqEwbmUFeI5Gkn1lpN0ulZzBUXl7-5M/edit?usp=sharing

                   

                  On Wednesday, April 10, the Funding the Ecosystem Infrastructure Working Group reviewed the product launch readiness checklist for making guidelines computable, which was created last week. The group also discussed which developers might be involved. The group then discussed how to distinguish between the provenance of the computable object and the provenance of its substantive content. We decided we need a contributor extension that could be applied to any resource. This would be a necessary requirement for the Guideline Authoring Tool.

                  On Friday April 12, the GRADE Ontology Working Group discussed progress on the GRADE Ontology Concept paper which will be presented at the May 1 GRADE meeting in Miami. Thank you to everyone who helped to edit the paper.

                  We then reviewed the developing protocol to add language translations to the GRADE Ontology.

                  Finally, we reviewed the terms that were open for vote last week. Indirectness in population received only positive votes and has been added to the code system.

                  Indirectness in exposure received one negative vote, was revised, and is now open for vote again.

                  A new term, Indirectness in comparator was also defined. There are currently two terms open for vote.

                  If you voted on the term "Indirectness in exposure" before, please vote again, as changes have been made.

                  The meeting was recorded and can be viewed here.

                  Term and Voting Status

                  Definition

                  Alternative Terms
                  (if any)

                  Comment for application
                  (if any)

                  Indirectness in population

                  Passed, added to the code system

                  Differences between the population-related characteristics of the evidence and the intended target application.

                  Indirectness in population means there are differences between the populations in the relevant research studies and the population under consideration in a question of interest.

                  Differences in population-related characteristics (such as disease severity, social factors, or other determinants of health) may result in differences in the estimates derived from the evidence and those that would be seen in the intended target application.

                  The question of interest may vary with context, such as the key considerations for a guideline or systematic review.

                  Indirectness in exposure

                  Revised and Reopened for vote

                  Differences between the exposure-related characteristics of the evidence and the intended target application.

                  • Indirectness in intervention
                  • Indirectness in exposure or intervention

                  Interventions are a subset of exposures intended to change outcomes.



                  Indirectness in exposure means there are differences between the interventions or exposures in the relevant research studies and the interventions or exposures under consideration in a question of interest.

                  Differences in exposure-related characteristics (such as setting, intensity, mode of delivery, timing, co-interventions, and competency of person providing the intervention) may result in differences in the estimates derived from the evidence and those that would be seen in the intended target application.



                  The question of interest may vary with context, such as the key considerations for a guideline or systematic review.

                  Indirectness in comparator

                  Newly open for vote

                   

                  Differences between the comparator-related characteristics of the evidence and the intended target application.

                  • Indirectness in control exposure
                  • Indirectness in control exposure (comparator)

                  Comparators are exposures used as the control or reference value in comparative evidence.
                  Interventions are a subset of exposures intended to change outcomes.

                  Indirectness in comparator means there are differences between the interventions or exposures used as control exposures in the relevant research studies and the interventions or exposures used as control exposures under consideration in a question of interest.

                  Differences in comparator-related characteristics (such as setting, intensity, mode of delivery, timing, co-interventions, and competency of person providing the intervention) may result in differences in the estimates derived from the evidence and those that would be seen in the intended target application.



                  The question of interest may vary with context, such as the key considerations for a guideline or systematic review.

                   

                   

                  FEvIR Platform and Tools Development Updates:

                  On Thursday, April 11, the Computable EBM Tools Development Working Group reviewed MAGIC-to-FEvIR Converter changes for efficiency and consistency with the GRADEpro converter.

                   

                  Releases on the FEvIR Platform:

                  Computable Publishing®: Fast Evidence Interoperability Resources (FEvIR) Platform is available for use now, but is “pre-release”.  The current version is 0.212.0 (April 9, 2024). Viewing resources is open without login.  Signing in is free and required to create content (which can then only be edited by the person who created the content).

Release 0.212.0 (April 9, 2024): editors of a resource can compare previous versions of the JSON by going to the "Usage View" tab and clicking the "Show Version Differences" button. Red and green are used to show changed, deleted, or added values. Orange and yellow are used to show moved entries in an array (for example, if an entry was deleted or added and all the other entries shifted as a result). The JSON version comparison can show "Just the Differences" or the entire "Full JSON" that also includes the unchanged elements.
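The version-comparison behavior described above can be sketched in a few lines. This is a hypothetical illustration, not the FEvIR Platform's actual implementation; the array-move detection (orange/yellow highlighting) is omitted.

```python
# Hypothetical sketch of a "Just the Differences" JSON comparison.
# Dict key order is naturally ignored because Python dicts compare by content.
import json

def json_diff(old, new, path=""):
    """Return a list of (path, change) tuples describing added, deleted,
    or changed values between two JSON-like structures."""
    diffs = []
    if isinstance(old, dict) and isinstance(new, dict):
        for key in sorted(set(old) | set(new)):
            p = f"{path}.{key}" if path else key
            if key not in old:
                diffs.append((p, f"added: {json.dumps(new[key])}"))
            elif key not in new:
                diffs.append((p, f"deleted: {json.dumps(old[key])}"))
            else:
                diffs.extend(json_diff(old[key], new[key], p))
    elif isinstance(old, list) and isinstance(new, list):
        for i in range(max(len(old), len(new))):
            p = f"{path}[{i}]"
            if i >= len(old):
                diffs.append((p, f"added: {json.dumps(new[i])}"))
            elif i >= len(new):
                diffs.append((p, f"deleted: {json.dumps(old[i])}"))
            else:
                diffs.extend(json_diff(old[i], new[i], p))
    elif old != new:
        diffs.append((path, f"changed: {json.dumps(old)} -> {json.dumps(new)}"))
    return diffs

old = {"title": "Report", "status": "draft", "note": "x"}
new = {"status": "active", "title": "Report"}  # same 'title', keys reordered
print(json_diff(old, new))
# [('note', 'deleted: "x"'), ('status', 'changed: "draft" -> "active"')]
```

Reordered-but-unchanged keys produce no output, which matches the later 0.215.0 behavior of not reporting order-only differences.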

                   

                  Computable Publishing®: MAGIC-to-FEvIR Converter version 0.14.0 (April 11, 2024) converts data from a MAGICapp JSON file (demo files available) to FEvIR Resources in FHIR JSON.

Release 0.14.0 (April 11, 2024) now converts much more quickly. It also creates Group resources for the population, the comparator group, the exposure group, the intervention definition, and the comparator definition. It creates a MAGICapp-to-FHIR Conversion Report instead of a Project, and no longer has a text field to enter a Project Name.
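As a rough illustration of the kind of Group resource the converter creates for a population, here is a hedged sketch in Python. The name and characteristic values are invented for illustration and are not the converter's actual output; the element shape loosely follows the FHIR R5 Group resource.

```python
# Hedged sketch (illustrative only) of a definitional population Group,
# in the style of a FHIR R5 Group resource expressed as a Python dict.
population_group = {
    "resourceType": "Group",
    "type": "person",
    "membership": "definitional",          # defined by criteria, not enumerated
    "name": "Adults with condition X",     # hypothetical population label
    "characteristic": [{
        "code": {"text": "age"},
        "valueRange": {"low": {"value": 18, "unit": "years"}},
        "exclude": False,                  # this characteristic is inclusive
    }],
}

print(population_group["resourceType"], population_group["membership"])
# Group definitional
```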

                  Quotes for thought

                  "Once I got into space, I was very comfortable in the universe. I felt like I had a right to be anywhere in this universe, that I belonged here as much as any speck of stardust, any comet, any planet." --Mae C. Jemison, Astronaut

                  "Concentrate all your thoughts upon the work at hand. The sun's rays do not burn until brought to a focus." -- Alexander Graham Bell

                  "Comfortable shoes and the freedom to leave are the two most important things in life." – Shel Silverstein

“No man needs a vacation so much as the man who has just had one.” ― Elbert Hubbard

                  “My favorite machine at the gym is the vending machine.” — Caroline Rhea

                  Joanne Dehnbostel

                  unread,
                  Apr 17, 2024, 9:20:12 AM4/17/24
                  to Health Evidence Knowledge Accelerator (HEvKA)

                   

10 people (AK, BA, CE, HL, JD, KR, KS, KW, RC, SM) participated today in up to 3 active working group meetings.

The Measuring the Rate of Scientific Knowledge Transfer Working Group subtly changed the instructions for evaluating article citations to determine whether the citing article was intended to guide clinical practice. We added the word "directly" in two places in response to questions researchers had when evaluating a particular article, which can serve as an example of the complexity of categorizing citing articles. These instructions can be viewed under the heading “Investigation Description” on the project page here: https://fevir.net/radar/3

                  The Statistics on FHIR Working Group introduced new participants to the work we are doing to respond to a Request For Information (RFI) from NIH https://nexus.od.nih.gov/all/2024/02/28/seeking-ideas-on-using-common-data-elements-for-nih-supported-clinical-research/ and submitted the response.  

                  The Ontology Management Working Group continued to improve the GRADE code system translation spreadsheet general instructions for data entry and discussed adding additional languages to the spreadsheet.

                  Releases on the FEvIR Platform:

                  The Fast Evidence Interoperability Resources (FEvIR) Platform is available for use now, but is “pre-release”.  The current version is 0.214.0 (April 16, 2024). Viewing resources is open without login.  Signing in is free and required to create content (which can then only be edited by the person who created the content).

                  Release 0.214.0 (April 16, 2024) automatically generates section Narrative content (for the Table View) in SummaryOfFindings Profile Evidence Reports generated by the Converter Tools (from MAGICapp and GRADEpro).

Computable Publishing®: MAGIC-to-FEvIR Converter version 0.16.0 (April 16, 2024) converts data from a MAGICapp JSON file (demo files available) to FEvIR Resources in FHIR JSON.

                  Release 0.16.0 (April 16, 2024) adjusts the output to match the Recommendation Profile in the EBMonFHIR Implementation Guide and provides a coding value for ‘Relative Risk’ in Evidence.statistic.statisticType elements.

                  The FEvIR®: RAte of Dissemination Assessment Research (RADAR) Tool Version 0.12.0 (April 16, 2024) facilitates research conduct and project management for an investigation to measure the rate of scientific knowledge transfer.

Release 0.12.0 (April 16, 2024) added pagination to two modals, Rate Index Articles and Evaluate Citations, which now show 50 rows of articles per page so that the modals load more quickly. The pagination bar that shows the different page numbers does not appear if there are 50 rows or fewer, since there is only one page. The table headers also align better when scrolling.
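The pagination rule described above (50 rows per page, and no pagination bar when everything fits on one page) can be sketched as follows. This is an assumption-laden illustration, not the RADAR Tool's actual code; the function name and return shape are invented.

```python
# Hypothetical sketch of 50-rows-per-page pagination with a hidden
# pagination bar when there is only one page.
ROWS_PER_PAGE = 50

def paginate(rows, page=1, per_page=ROWS_PER_PAGE):
    """Return one page of rows plus whether a pagination bar is needed."""
    total_pages = max(1, -(-len(rows) // per_page))  # ceiling division
    page = min(max(page, 1), total_pages)            # clamp to a valid page
    start = (page - 1) * per_page
    return {
        "rows": rows[start:start + per_page],
        "page": page,
        "total_pages": total_pages,
        "show_pagination_bar": len(rows) > per_page,  # hidden at <= 50 rows
    }

articles = [f"article-{i}" for i in range(1, 121)]  # 120 rows -> 3 pages
result = paginate(articles, page=3)
print(result["total_pages"], len(result["rows"]), result["show_pagination_bar"])
# 3 20 True
```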

                  Quote for thought: “Well done is better than well said.” – Benjamin Franklin


                  Joanne Dehnbostel

                  unread,
                  Apr 18, 2024, 2:26:45 AM4/18/24
                  to Health Evidence Knowledge Accelerator (HEvKA)



                   

8 people (BA, CE, JD, JO, KR, KS, MA, SK) participated today in up to 2 active working group meetings.

The Funding the Ecosystem Infrastructure Working Group reviewed and discussed the product launch readiness checklist for making guidelines computable and created an HL7 Jira request, https://jira.hl7.org/browse/FHIR-45293, titled "Add contributorship extension to EvidenceReport Profile".

A common need in reporting evidence is to distinguish the "author" of a Resource in terms of academic/scholarly contributorship (creating the conceptual knowledge contained in the Resource content) from the "author" of a Resource in terms of data entry (creating the structured form of the Resource content). Creation of a contributorship extension for use with a Composition Resource will provide the ability to make such a distinction. The contributorship extension should use the same structure that is found in the Citation.citedArtifact.contributorship element. This complex element will support many different use cases for contributorship handling and will allow direct data transfer with Citation Resources.

                  These core FHIR structure changes will ultimately support the need to make guidelines computable. 
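A rough sketch of how such an extension might distinguish the two author roles on a Composition Resource follows. The extension URL and role strings are placeholders (the extension itself is only being requested in FHIR-45293); the nested-entry shape loosely mirrors Citation.citedArtifact.contributorship.

```python
# Hypothetical sketch (placeholder extension URL, illustrative names) of a
# Composition carrying both a scholarly author and a data entry author.
composition = {
    "resourceType": "Composition",
    "status": "final",
    "type": {"text": "EvidenceReport"},
    "title": "Example evidence report",
    "extension": [{
        "url": "http://example.org/fhir/StructureDefinition/contributorship",
        "extension": [
            {   # scholarly author: created the conceptual knowledge
                "url": "entry",
                "extension": [
                    {"url": "contributor", "valueString": "Dr. A. Example"},
                    {"url": "role", "valueString": "author"},
                ],
            },
            {   # data entry author: created the structured form of the content
                "url": "entry",
                "extension": [
                    {"url": "contributor", "valueString": "B. Example"},
                    {"url": "role", "valueString": "data entry"},
                ],
            },
        ],
    }],
}

roles = [
    e["extension"][1]["valueString"]
    for e in composition["extension"][0]["extension"]
]
print(roles)  # ['author', 'data entry']
```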

                  The Communications Working Group drafted an abstract for the GRADE Ontology Concept Paper.

                  The HL7 Clinical Decision Support working group considered two Jira requests which originated with our group.  

                  https://jira.hl7.org/browse/FHIR-45288: "Change RecommendationJustification examples to artifactReference(Recommendation)"

                  The RecommendationJustification examples currently reference a PlanDefinition Resource in the artifactReference element.

                  Modeling in the EBMonFHIR project has determined that referencing a Composition Resource (Recommendation Profile) would be a better example. 

and https://jira.hl7.org/browse/FHIR-45293: "Add contributorship extension to EvidenceReport Profile" (created this morning).

The first was brought to a vote and passed; the vote on the second was delayed until next week.


                  Releases on the FEvIR Platform:

                  The Fast Evidence Interoperability Resources (FEvIR) Platform is available for use now, but is “pre-release”.  The current version is 0.215.0 (April 17, 2024). Viewing resources is open without login.  Signing in is free and required to create content (which can then only be edited by the person who created the content).

                  Release 0.215.0 (April 17, 2024) avoids detecting ‘differences’ in the comparison of Resource versions when the ‘difference’ is only the order of element names in the JSON structure.

                  The Computable Publishing®: Recommendation Authoring Tool version 0.14.0 (April 17, 2024) creates a Composition Resource with a Recommendation Profile and the associated Resources for a structured representation of a recommendation.

                  Release 0.14.0 (April 17, 2024) creates a RecommendationJustification Profile instance with artifactReference value referencing the Recommendation (i.e. the present Composition Resource).

                  FEvIR®: Recommendation Justification Builder/Viewer version 0.26.0 (April 17, 2024) creates and displays a RecommendationJustification Profile of an ArtifactAssessment Resource.

                  Release 0.26.0 (April 17, 2024) no longer displays Recommendation values derived from the artifactReference instance if referencing a PlanDefinition Resource (as this is now handled with a Composition Resource).

                  Computable Publishing®: MAGIC-to-FEvIR Converter version 0.17.0 (April 17, 2024) converts data from a MAGICapp JSON file (demo files available) to FEvIR Resources in FHIR JSON.

                  Release 0.17.0 (April 17, 2024) creates Recommendation (Profile of Composition) and RecommendationJustification (Profile of ArtifactAssessment) Resources including the population data.

                  The FEvIR®: RAte of Dissemination Assessment Research (RADAR) Tool Version 0.13.0 (April 17, 2024) facilitates research conduct and project management for an investigation to measure the rate of scientific knowledge transfer.

                  Release 0.13.0 (April 17, 2024) adds instructions for evaluating citing articles.

                  The FEvIR®: Bundle Builder/Viewer Tool version 0.1.0 (April 17, 2024) provides a human-friendly summary of a Bundle Resource.

                  Release 0.1.0 (April 17, 2024) introduces FEvIR®: Bundle Builder/Viewer to provide a human-friendly summary of a Bundle Resource.


Quote for thought: "Sometimes the most important thing in a whole day is the rest we take between two deep breaths." — Etty Hillesum

                  Joanne Dehnbostel

                  unread,
                  Apr 19, 2024, 9:32:09 AM4/19/24
                  to Health Evidence Knowledge Accelerator (HEvKA)


11 people (AN, BA, CE, ER, GL, IK, JD, KR, KS, SL, XZ) participated today in up to 2 active working group meetings.

The EBM Implementation Guide Working Group (a CDS EBMonFHIR sub-WG) hosted the monthly GINTech meeting. The discussion centered around the use of the FHIR Citation Resource as it pertains to the representation of guideline evidence. The group spoke specifically about the issue of provenance and how to represent the authorship/contributorship of evidence at multiple levels, including data entry and auto entry by a software tool. This discussion continued as people joined for the Computable EBM Tools Development Working Group meeting and resulted in a comment added to an HL7 FHIR tracker in Jira: https://jira.hl7.org/browse/FHIR-45293. This Jira ticket will be discussed in next week's HL7 Clinical Decision Support Working Group meeting.

                  "Discussion with a dozen participants in the EBMonFHIR IG CDS sub-WG/HEvKA/GINTech meeting suggested a clear need to disambiguate the 'academic' author and 'data entry' author roles but also avoid the complexity of the contributorship element.  The simplest solution for the EBMonFHIR IG is to modify the EvidenceReport Profile to add additional terms to the ContributorRole ValueSet binding for attester.mode"


Quote for thought: "You have brains in your head. You have feet in your shoes. You can steer yourself any direction you choose." —Dr. Seuss

                  Joanne Dehnbostel

                  unread,
                  Apr 21, 2024, 12:46:38 AM4/21/24
                  to Health Evidence Knowledge Accelerator (HEvKA)


12 people (BA, CA-D, HK, HL, JD, JT, KS, KW, MA, PW, SM, TD) participated in up to 3 active working group meetings.

                  The Risk of Bias Terminology Working Group found that of the 3 terms open for voting, 2 of the terms were approved and added to the code system. 1 term, publication bias, had a negative vote and was revised. The group also suggested adding links to connect related terms. An example of these links can be seen in the comment for application for publication bias. There is 1 Risk of Bias Term open for voting:

                  Term

                  Definition

                  Alternative Terms
                  (if any)

                  Comment for application
                  (if any)

publication bias (reopened for voting)

                  A study selection bias in which the publicly available studies are not representative of all conducted studies.

                  • study selection bias due to selective reporting bias
                  • study selection bias due to non-reporting bias
                  • study selection bias due to reporting bias

                  Publication bias arises from the failure to identify all studies that have been conducted, either published (i.e., publicly available) or unpublished. The term 'studies' means evidence or research results in any form where such studies would meet the study eligibility criteria without consideration of criteria regarding the form of publication. The phrase 'publicly available studies' means the studies are available to the broad academic community and the public through established distribution channels in any form, including forms with restricted access. Established distribution channels include peer-reviewed journals, books, conference proceedings, dissertations, reports by governmental or research organizations, preprints, and study registries.

                  Publication bias often leads to an overestimate in the effect in favor of the study hypothesis, because studies with statistically significant positive results are more likely to be publicly available.

                  The terms reporting bias and selective reporting bias are used to describe biases in study reports, so the use of these terms in the context of study selection bias results in prepending 'study selection bias due to' to the term to avoid ambiguous use.

study selection bias (passed)

A selection bias resulting from factors that influence study selection, from methods used to include or exclude studies for evidence synthesis, or from differences between the study sample and the population of interest.

                  bias in study selection

                   

                  A study selection bias due to an inadequate process for screening and/or evaluating potentially eligible studies.

                   

                  An adequate process for screening and evaluating potentially eligible studies should generally include at least two independent reviewers for any steps that involve subjective judgment. Any step involving subjective judgment may introduce systematic distortions into the research findings.

                   

The GRADE Ontology Working Group found that indirectness in exposure and indirectness in comparator received negative votes; the terms were refined and re-opened for vote. The group then worked on the definition of indirectness in outcome. There are currently 3 indirectness terms open for voting. The GRADE Concept Paper has been submitted to be shared at the GRADE Working Group Meeting in Miami on May 1, 2024.

Term

Definition

Alternative Terms
(if any)

Comment for application
(if any)

Indirectness in exposure (revised and re-opened for vote)

                  • Indirectness in intervention
                  • Indirectness in exposure or intervention

                  Interventions are a subset of exposures intended to change outcomes.

                  Indirectness in exposure means there are differences between the interventions or exposures in the relevant research studies and the intervention or exposure under consideration in a question of interest.

                  Differences in exposure-related characteristics (such as setting, intensity, mode of delivery, timing, co-interventions, and level of expertise of person providing the intervention) may result in differences in the estimates derived from the evidence and those that would be seen in the intended target application.



                  The question of interest may vary with context, such as the key considerations for a guideline or systematic review.

Indirectness in outcome (newly open for vote)

                  Indirectness in outcome means there are differences between the outcomes in the relevant research studies and the outcome under consideration in a question of interest.

                  Differences in outcome-related characteristics (such as definition, method of measurement, and timing) may result in differences in the estimates derived from the evidence and those that would be seen in the intended target application.

                  Outcomes in the evidence may differ from those of primary interest. For instance, surrogate outcomes may not be directly important to patients but are measured in the presumption that changes in the surrogate outcome reflect changes in an outcome that is important to patients.



                  The question of interest may vary with context, such as the key considerations for a guideline or systematic review.

Indirectness in comparator (revised and reopened for voting)

                    • Indirectness in control exposure
                    • Indirectness in control exposure (comparator)
                    • Indirectness in control intervention
                    • Indirectness in control exposure or control intervention
                    • Indirectness in reference exposure
                    • Indirectness in reference intervention

                    Comparators are exposures used as the control or reference value in comparative evidence. Interventions are a subset of exposures intended to change outcomes. Control interventions are a subset of control exposures that are used as a comparator intervention.

                    Indirectness in comparator means there are differences between the interventions or exposures used as control exposures in the relevant research studies and the interventions or exposures used as the control exposure under consideration in a question of interest.

                    Differences in comparator-related characteristics (such as setting, intensity, mode of delivery, timing, co-interventions, and level of expertise of person providing the intervention) may result in differences in the estimates derived from the evidence and those that would be seen in the intended target application.



                    The question of interest may vary with context, such as the key considerations for a guideline or systematic review.

                     

                     

                    The Project Management Working Group created the suggested agenda for next week:

                    Day/Time (Eastern)

                    Working Group

                    Agenda Items

                    Monday 8-9 am

                    Project Management

                    FHIR changes and EBMonFHIR Implementation Guide issues

                    Monday 9-10 am

                    Setting the Scientific Record on FHIR

                    SRDR+ to FEvIR Review

                    Monday 10-11 am

                    CQL Development (a CDS EBMonFHIR sub-WG)

                    Evaluate the focus for this working group

                    Monday 2-3 pm

                    Statistic Terminology

                    Review SEVCO terms (8 terms open for vote)

                    Tuesday 9 am-10 am

                    Measuring the Rate of Scientific Knowledge Transfer

                    Review Initial Pilot Progress

                    Tuesday 2-3 pm

                    StatisticsOnFHIR (a CDS EBMonFHIR sub-WG)

                    Review Baseline Measure Report Authoring Tool progress

                    Tuesday 3-4 pm

                    Ontology Management

                    Review Objectives and Progress

                    Wednesday 8-9 am

                    Funding the Ecosystem Infrastructure

                    Review progress on product launch readiness checklist for making guidelines computable

                    Wednesday 9-10 am

                    Communications (Awareness, Scholarly Publications)

                    Publications, presentations, website

                    Thursday 8-9 am

                    EBM Implementation Guide (a CDS EBMonFHIR sub-WG)

                    Review EBMonFHIR IG change requests

                    Thursday 9-10 am

                    Computable EBM Tools Development

                    Review progress with Converter tools

                    Friday 9-10 am

                    Risk of Bias Terminology

                    Review SEVCO terms (1 term [publication bias] open for vote)

                    Friday 10-11 am

                    GRADE Ontology

                    Term development (3 Indirectness terms open for vote)

                    Friday 12-1 pm

                    Project Management

                    Prepare Weekly Agenda

                     

                    Releases on the FEvIR Platform:

                    Computable Publishing®: MAGIC-to-FEvIR Converter version 0.18.0 (April 19, 2024) converts data from a MAGICapp JSON file (demo files available) to FEvIR Resources in FHIR JSON.

                    Release 0.18.0 (April 19, 2024) handles most of the full guideline content instead of single recommendations, and supports entering a MAGICapp ID to get the MAGICapp file by API.

                     

                    Quote for Thought: “Have you ever noticed that anybody driving slower than you is an idiot, and anyone going faster than you is a maniac?” — George Carlin

                    Joanne Dehnbostel

                    unread,
                    Apr 22, 2024, 11:23:46 AM4/22/24
                    to Health Evidence Knowledge Accelerator (HEvKA), Health Evidence Knowledge Accelerator (HEvKA) Weekly Update

                     

                     

Project Coordination Updates:

26 people (AK, AN, BA, CA-D, CE, ER, GL, HK, HL, IK, JD, JJ, JO, JT, KR, KS, KW, MA, PW, RC, RL, SK, SL, SM, TD, XZ) from 12 countries (Belgium, Canada, China, Finland, Germany, Italy, Norway, Peru, Poland, Taiwan, UK, USA) participated in up to 14 active working group meetings.

                    On Monday April 15, the Project Management Working Group discussed and responded to a group that has volunteered to work on a Chinese translation of our developing GRADE Ontology. We now have the potential to share our work in French, Spanish, Portuguese, Norwegian and Chinese.

                     

                    On Wednesday April 17, the Communications Working Group drafted an abstract for the GRADE Ontology Concept Paper.

                     

                    On Friday April 19, the Project Management Working Group created the suggested agenda for next week:

                    Day/Time (Eastern)

                    Working Group

                    Agenda Items

                    Monday 8-9 am

                    Project Management

                    FHIR changes and EBMonFHIR Implementation Guide issues

                    Monday 9-10 am

                    Setting the Scientific Record on FHIR

                    SRDR+ to FEvIR Review

                    Monday 10-11 am

                    CQL Development (a CDS EBMonFHIR sub-WG)

                    Evaluate the focus for this working group

                    Monday 2-3 pm

                    Statistic Terminology

                    Review SEVCO terms (8 terms open for vote)

                    Tuesday 9 am-10 am

                    Measuring the Rate of Scientific Knowledge Transfer

                    Review Initial Pilot Progress

                    Tuesday 2-3 pm

                    StatisticsOnFHIR (a CDS EBMonFHIR sub-WG)

                    Review Baseline Measure Report Authoring Tool progress

                    Tuesday 3-4 pm

                    Ontology Management

                    Review Objectives and Progress

                    Wednesday 8-9 am

                    Funding the Ecosystem Infrastructure

                    Review progress on product launch readiness checklist for making guidelines computable

                    Wednesday 9-10 am

                    Communications (Awareness, Scholarly Publications)

                    Publications, presentations, website

                    Thursday 8-9 am

                    EBM Implementation Guide (a CDS EBMonFHIR sub-WG)

                    Review EBMonFHIR IG change requests

                    Thursday 9-10 am

                    Computable EBM Tools Development

                    Review progress with Converter tools

                    Friday 9-10 am

                    Risk of Bias Terminology

                    Review SEVCO terms (1 term [publication bias] open for vote)

                    Friday 10-11 am

                    GRADE Ontology

                    Term development (3 Indirectness terms open for vote)

                    Friday 12-1 pm

                    Project Management

                    Prepare Weekly Agenda




                    SEVCO Updates:

                    On Monday April 15, the Statistics Terminology Working Group reviewed the 5 terms open for vote last week and made the following revisions based on feedback received from the expert working group:

                    Terms approved by unanimous vote:
                      - absolute value
                      - area under the ROC curve

Terms with negative votes and feedback:
                      - quantile
A negative vote with the comment "Can we add information about quartiles"; we added more to the comment for application and sent the term back out for vote.

                      - partial area under the ROC curve
A negative vote with a comment saying to add "and/or range of possible values for the true positive rate", which we added to the end of the definition. We also corrected an issue with the comment for application to now say "1-specificity" instead of "1-sensitivity", and sent the term back out for vote.

                    We corrected the term "area under the ROC curve" to say "1-specificity" instead of "1-sensitivity".

                    - percentile
                      Added an alternative term "centile" and "%ile"
                      And opened it for vote

                    - decile
Added a definition and comment for application based on the percentile comment.
                      And opened it for vote

                    - quartile
                      Added a definition for it and comment for application for it
                      And opened it for vote

                    - chi square for homogeneity
                      Added a definition
                      Added an alternative term "chi-square test of homogeneity"

We will need to reach out to a previous negative voter on "area under the value-by-time curve" and ask them to vote on the term again, with more information about the change they would like to see.
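The quantile-family terms under revision above (percentile, decile, quartile) can be illustrated with Python's standard library. This is only an illustration of the definitions, not part of the SEVCO terminology; the "inclusive" interpolation method is one of several conventions for computing quantiles.

```python
# Illustration: quartiles, deciles, and percentiles are all quantiles,
# differing only in how the portion of the data is expressed.
from statistics import quantiles

data = list(range(1, 101))  # the values 1..100

# Quartiles split the data into fourths; the second quartile is the median.
q1, q2, q3 = quantiles(data, n=4, method="inclusive")
print(q1, q2, q3)  # 25.75 50.5 75.25

# The fourth decile has 40% of the data at or below it,
# so it equals the fortieth percentile (40%ile).
fourth_decile = quantiles(data, n=10, method="inclusive")[3]
fortieth_percentile = quantiles(data, n=100, method="inclusive")[39]
print(fourth_decile, fortieth_percentile)  # 40.6 40.6
```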

There are currently 8 statistic terms open for vote (if you have voted on any of these terms before, please vote again):


                    Term

                    Definition

                    Alternative Terms
                    (if any)

                    Comment for application
                    (if any)

quantile

A statistic that represents the value for which the number of data points at or below it constitutes a specific portion of the total number of data points.

                    Quantile is a type of statistic but not used without specification of the portion it represents. For example, one may report a fortieth percentile (40%ile) but one does not report a percentile without specification of which percentile. One may report a first quartile (25%ile), second quartile (median or 50%ile), or third quartile (75%ile) but one does not report a quartile without specification of which quartile.

                    percentile

                    A quantile in which the specific portion of the number of data points is expressed as a percentage.

                    • centile
                    • %ile

                    Quantile is defined as a statistic that represents the value for which the number of data points at or below it constitutes a specific portion of the total number of data points.

                    Percentile is a type of statistic but not used without specification of the portion it represents. For example, one may report a fortieth percentile (40%ile) but one does not report a percentile without specification of which percentile. 40% of the data is at or below the 40%ile.

                    decile

                    A quantile in which the specific portion of the number of data points is expressed as a number of tenths.

                    Quantile is defined as a statistic that represents the value for which the number of data points at or below it constitutes a specific portion of the total number of data points.

                    Decile is a type of statistic but not used without specification of the portion it represents. For example, one may report a fourth decile but one does not report a decile without specification of which decile. 40% of the data is at or below the fourth decile.

                    quartile

                    A quantile in which the specific portion of the number of data points is expressed as a number of fourths.

                    Quantile is defined as a statistic that represents the value for which the number of data points at or below it constitutes a specific portion of the total number of data points.

                    Quartile is a type of statistic but not used without specification of the portion it represents. For example, one may report a third quartile but one does not report a quartile without specification of which quartile. 75% of the data is at or below the third quartile. The second quartile is also called the median.

                    chi square for homogeneity

                    A measure of heterogeneity, based on the chi-square statistic, for reporting an analytic finding regarding whether two or more multinomial distributions are equal, accounting for chance variability.

                    • chi-square test of homogeneity
                    • chi2 heterogeneity statistic

                    A measure of heterogeneity is defined as a statistic that represents the variation or spread among values in the set of estimates across studies.
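                    A minimal sketch (not from the source) of the statistic behind this term: observed counts in two multinomial samples are compared against the counts expected if the two distributions were equal. The function name and data are illustrative assumptions:

```python
# Chi-square statistic for homogeneity: sum of (observed - expected)^2 / expected
# over a table of category counts, where expected counts assume the samples
# share one multinomial distribution.

def chi_square_homogeneity(table):
    """table: list of rows (samples) of category counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Two samples classified into three categories
observed = [[30, 20, 50], [40, 30, 30]]
print(round(chi_square_homogeneity(observed), 3))  # 8.429
```

                    The resulting statistic would then be compared with a chi-square distribution to account for chance variability, as the definition describes.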

                    area under the ROC curve

                    An area under the curve where the curve is the true positive rate and the range of interest is the false positive rate.

                    • AUC
                    • AUROC
                    • area under the receiver operating characteristic curve
                    • c-statistic
                    • C-statistic
                    • Harrell's C
                    • concordance index
                    • concordance statistic

                    ROC stands for Receiver Operating Characteristic. The area under the ROC curve is used to assess the performance of a classifier used to distinguish between two or more groups. Another term for true positive rate is sensitivity and another term for false positive rate is 1-specificity.



                    The c-statistic is the area under the ROC curve calculated with the full range of possible values for true positive rate and false positive rate. Another interpretation of the c-statistic is similar without explicitly referencing the ROC curve: "The C statistic is the probability that, given 2 individuals (one who experiences the outcome of interest and the other who does not or who experiences it later), the model will yield a higher risk for the first patient than for the second. It is a measure of concordance (hence, the name “C statistic”) between model-based risk estimates and observed events. C statistics measure the ability of a model to rank patients from high to low risk but do not assess the ability of a model to assign accurate probabilities of an event occurring (that is measured by the model’s calibration). C statistics generally range from 0.5 (random concordance) to 1 (perfect concordance)." (JAMA. 2015;314(10):1063-1064. doi:10.1001/jama.2015.11082)
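                    The concordance interpretation quoted above can be sketched directly (an illustrative example, not from the source; names and data are assumptions): the C statistic is the fraction of (event, non-event) pairs in which the model assigns the higher risk to the individual who experiences the outcome.

```python
# Concordance-probability sketch of the C statistic: for every pair of one
# event case and one non-event case, count the pair as concordant when the
# event case has the higher predicted risk; ties count as half-concordant.

def c_statistic(scores, outcomes):
    events = [s for s, y in zip(scores, outcomes) if y == 1]
    non_events = [s for s, y in zip(scores, outcomes) if y == 0]
    pairs = concordant = 0.0
    for e in events:
        for ne in non_events:
            pairs += 1
            if e > ne:
                concordant += 1
            elif e == ne:
                concordant += 0.5
    return concordant / pairs

risks = [0.9, 0.8, 0.3, 0.2]   # model-predicted risks
observed = [1, 0, 1, 0]        # 1 = outcome occurred
print(c_statistic(risks, observed))  # 0.75
```

                    This pairwise concordance equals the area under the ROC curve, which is why AUROC and c-statistic appear as alternative terms for the same concept.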

                    partial area under the ROC curve

                    An area under the curve where the curve is the true positive rate and the range of interest is a specified portion of the range of possible values for the false positive rate and/or range of possible values for the true positive rate.

                    Area under the ROC curve is defined as an area under the curve where the curve is the true positive rate and the range of interest is the false positive rate. ROC stands for Receiver Operating Characteristic. The area under the ROC curve is used to assess the performance of a classifier used to distinguish between two or more groups. Another term for true positive rate is sensitivity and another term for false positive rate is 1-specificity.

                    area under the value-by-time curve

                    An area under the curve where the curve is the repeated measures of a variable over time and the domain of interest is time.

                    • area under the value-time curve
                    • area under the value vs. time curve
                    • area under the value versus time curve

                    The area under the value-by-time curve is used for pharmacokinetics, pharmacodynamics, and physiological monitoring.
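                    The value-by-time AUC described above is commonly approximated with the trapezoidal rule over the repeated measures. A minimal sketch (not from the source; the sampling times and concentrations are invented for illustration):

```python
# Area under a value-by-time curve (e.g., drug concentration over time in
# pharmacokinetics), approximated by the trapezoidal rule.

def auc_trapezoid(times, values):
    """Approximate the area under the value-by-time curve."""
    area = 0.0
    for i in range(1, len(times)):
        width = times[i] - times[i - 1]
        area += width * (values[i] + values[i - 1]) / 2
    return area

hours = [0, 1, 2, 4, 8]            # sampling times (h)
conc = [0.0, 4.0, 3.0, 1.5, 0.5]   # measured concentrations (mg/L)
print(auc_trapezoid(hours, conc))  # 14.0 (mg*h/L)
```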



                    On Friday April 19, the Risk of Bias Terminology Working Group found that of the 3 terms open for voting, 2 were approved and added to the code system. One term, publication bias, received a negative vote and was revised. The group also suggested adding links to connect related terms; an example of these links can be seen in the comment for application for publication bias. There is 1 Risk of Bias term open for voting:

                    Term

                    Definition

                    Alternative Terms
                    (if any)

                    Comment for application
                    (if any)

                    publication bias-reopened for voting

                    A study selection bias in which the publicly available studies are not representative of all conducted studies.

                    • study selection bias due to selective reporting bias
                    • study selection bias due to non-reporting bias
                    • study selection bias due to reporting bias

                    Publication bias arises from the failure to identify all studies that have been conducted, either published (i.e., publicly available) or unpublished. The term 'studies' means evidence or research results in any form where such studies would meet the study eligibility criteria without consideration of criteria regarding the form of publication. The phrase 'publicly available studies' means the studies are available to the broad academic community and the public through established distribution channels in any form, including forms with restricted access. Established distribution channels include peer-reviewed journals, books, conference proceedings, dissertations, reports by governmental or research organizations, preprints, and study registries.

                    Publication bias often leads to an overestimate in the effect in favor of the study hypothesis, because studies with statistically significant positive results are more likely to be publicly available.

                    The terms reporting bias and selective reporting bias are used to describe biases in study reports, so the use of these terms in the context of study selection bias results in prepending 'study selection bias due to' to the term to avoid ambiguous use.

                    study selection bias-passed

                    A selection bias resulting from factors that influence study selection, from methods used to include or exclude studies for evidence synthesis, or from differences between the study sample and the population of interest.

                    bias in study selection

                     

                    bias in study selection process-passed

                    A study selection bias due to an inadequate process for screening and/or evaluating potentially eligible studies.

                     

                    An adequate process for screening and evaluating potentially eligible studies should generally include at least two independent reviewers for any steps that involve subjective judgment. Any step involving subjective judgment may introduce systematic distortions into the research findings.

                     



                     

                    FEvIR Platform and Tools Development Updates:

                    The Setting the Scientific Record on FHIR Working Group worked on development of a common solution across converter tools (including MAGICapp and GRADEpro) to autogenerate the section narrative content for comparative evidence reports.

                     

                    HL7 Standards Development Updates:

                    On Monday April 15, the CQL Development Working Group (a CDS EBMonFHIR sub-WG) worked with SRDR to plan how they will bulk load a project on the FEvIR Platform and generally discussed API interactions and resource identity management (FOIs).

                     

                    On Tuesday April 16, the Statistics on FHIR Working Group introduced new participants to the work we are doing to respond to a Request For Information (RFI) from NIH (https://nexus.od.nih.gov/all/2024/02/28/seeking-ideas-on-using-common-data-elements-for-nih-supported-clinical-research/) and submitted the response.

                     

                    On Wednesday April 17, the HL7 Clinical Decision Support Working Group considered two Jira requests that originated with our group.

                    https://jira.hl7.org/browse/FHIR-45288: "Change RecommendationJustification examples to artifactReference(Recommendation)"

                    The RecommendationJustification examples currently reference a PlanDefinition Resource in the artifactReference element.

                    Modeling in the EBMonFHIR project has determined that referencing a Composition Resource (Recommendation Profile) would be a better example. 

                    and https://jira.hl7.org/browse/FHIR-45293: "Add contributorship extension to EvidenceReport Profile" (created this morning).

                    The first one was brought to a vote and passed; the second vote was delayed until next week.




                    On Thursday April 18, the EBM Implementation Guide Working Group (a CDS EBMonFHIR sub-WG) hosted the monthly GINTech meeting. The discussion centered around the use of the FHIR Citation Resource as it pertains to the representation of guideline evidence. The group spoke specifically about the issue of provenance and how to represent the authorship/contributorship of evidence at multiple levels including data entry and auto entry by a software tool. This discussion continued as people joined for the Computable EBM Tools Development Working Group meeting and resulted in a comment which was added to an HL7 FHIR tracker in Jira https://jira.hl7.org/browse/FHIR-45293. This Jira ticket will be discussed in next week's HL7 Clinical Decision Support Working Group meeting. 

                    "Discussion with a dozen participants in the EBMonFHIR IG CDS sub-WG/HEvKA/GINTech meeting suggested a clear need to disambiguate the 'academic' author and 'data entry' author roles but also avoid the complexity of the contributorship element.  The simplest solution for the EBMonFHIR IG is to modify the EvidenceReport Profile to add additional terms to the ContributorRole ValueSet binding for attester.mode"

                     

                     

                    Research Development Updates:

                    On Tuesday April 16, the Measuring the Rate of Scientific Knowledge Transfer Working Group subtly changed the instructions for evaluating article citations to determine whether the citing article was intended to guide clinical practice. We added the word "directly" in two places in response to questions researchers had when evaluating a particular article, which can be used as an example of the complexity of categorizing citing articles. These instructions can be viewed under the heading “Investigation Description” on the project page here: https://fevir.net/radar/3

                     

                     

                    Knowledge Ecosystem Liaison/Coordination Updates:

                     

                    On Tuesday April 16, the Ontology Management Working Group continued to improve the GRADE code system translation spreadsheet general instructions for data entry and discussed adding additional languages to the spreadsheet.

                     

                    On Wednesday April 17, the Funding the Ecosystem Infrastructure Working Group reviewed and discussed the product launch readiness checklist for making guidelines computable and created an HL7 Jira request (https://jira.hl7.org/browse/FHIR-45293) titled "Add contributorship extension to EvidenceReport Profile".

                    A common need in reporting evidence is to distinguish the "author" of a Resource in terms of academic/scholarly contributorship for creating the conceptual knowledge contained in the Resource content from the "author" of a Resource in terms of data entry for creating the structured form of the Resource content. Creation of a contributorship extension used for a Composition Resource will provide the ability to make such a distinction. The contributorship extension should use the same structure that is found in the Citation.citedArtifact.contributorship element. This complex element will support many different use cases for contributorship handling and will allow direct data transfer with Citation Resources.

                    These core FHIR structure changes will ultimately support the need to make guidelines computable. 

                     

                     

                    The GRADE Ontology Working Group found that indirectness in exposure and indirectness in comparator received negative votes; the terms were refined and reopened for vote. The group then worked on the definition of indirectness in outcome. There are currently 3 indirectness terms open for voting. The GRADE Concept paper has been submitted to be shared at the GRADE Working Group Meeting in Miami on May 1, 2024.

                    Term

                     Alternative Terms
                    (if any)

                    Comment for application
                    (if any)

                    Indirectness in exposure-revised and re-opened for vote

                    • Indirectness in intervention
                    • Indirectness in exposure or intervention

                    Interventions are a subset of exposures intended to change outcomes.

                    Indirectness in exposure means there are differences between the interventions or exposures in the relevant research studies and the intervention or exposure under consideration in a question of interest.

                    Differences in exposure-related characteristics (such as setting, intensity, mode of delivery, timing, co-interventions, and level of expertise of person providing the intervention) may result in differences in the estimates derived from the evidence and those that would be seen in the intended target application.



                    The question of interest may vary with context, such as the key considerations for a guideline or systematic review.

                    Indirectness in outcome-newly open for vote


                    Indirectness in outcome means there are differences between the outcomes in the relevant research studies and the outcome under consideration in a question of interest.

                    Differences in outcome-related characteristics (such as definition, method of measurement, and timing) may result in differences in the estimates derived from the evidence and those that would be seen in the intended target application.

                    Outcomes in the evidence may differ from those of primary interest. For instance, surrogate outcomes may not be directly important to patients but are measured in the presumption that changes in the surrogate outcome reflect changes in an outcome that is important to patients.

                    The question of interest may vary with context, such as the key considerations for a guideline or systematic review.

                    Indirectness in comparator-revised and reopened for voting

                    • Indirectness in control exposure
                    • Indirectness in control exposure (comparator)
                    • Indirectness in control intervention
                    • Indirectness in control exposure or control intervention
                    • Indirectness in reference exposure
                    • Indirectness in reference intervention

                    Comparators are exposures used as the control or reference value in comparative evidence. Interventions are a subset of exposures intended to change outcomes. Control interventions are a subset of control exposures that are used as a comparator intervention.

                    Indirectness in comparator means there are differences between the interventions or exposures used as control exposures in the relevant research studies and the interventions or exposures used as the control exposure under consideration in a question of interest.

                    Differences in comparator-related characteristics (such as setting, intensity, mode of delivery, timing, co-interventions, and level of expertise of person providing the intervention) may result in differences in the estimates derived from the evidence and those that would be seen in the intended target application.



                    The question of interest may vary with context, such as the key considerations for a guideline or systematic review.

                     

                    Releases on the FEvIR Platform:

                    The Fast Evidence Interoperability Resources (FEvIR) Platform is available for use now, but is “pre-release”. The current version is 0.215.0 (April 17, 2024). Viewing resources is open without login. Signing in is free and required to create content (which can then only be edited by the person who created the content).

                    Release 0.213.0 (April 15, 2024) adds an alert of "All resources successfully submitted." to the bulk JSON entry page and automatically generates section narrative from the referenced entry Resources for Comparative Evidence Reports generated by Converter Tools. The citation element of the RelatedArtifact datatype now uses markdown features.

                     

                    Release 0.214.0 (April 16, 2024) automatically generates section Narrative content (for the Table View) in SummaryOfFindings Profile Evidence Reports generated by the Converter Tools (from MAGICapp and GRADEpro).

                     

                    Release 0.215.0 (April 17, 2024) avoids detecting ‘differences’ in the comparison of Resource versions when the ‘difference’ is only the order of element names in the JSON structure.

                     

                     

                    The Computable Publishing®: Comparative Evidence Report Authoring Tool version 0.20.0 (April 15, 2024) creates and displays a Composition Resource with a ComparativeEvidenceReport Profile.

                    Release 0.20.0 (April 15, 2024) provides "Generate Natural Language Summary" buttons in all sections to automatically generate a section narrative from the referenced entry (or focus) Resource, and supports immediate saving of changes to referenced Resources.

                     

                    Computable Publishing®: MAGIC-to-FEvIR Converter version 0.18.0 (April 19, 2024) converts data from a MAGICapp JSON file (demo files available) to FEvIR Resources in FHIR JSON.

                    Release 0.15.0 (April 15, 2024) creates statistic.description content for each of the Evidence.statistic instances generated.

                    Release 0.16.0 (April 16, 2024) adjusts the output to match the Recommendation Profile in the EBMonFHIR Implementation Guide and provides a coding value for ‘Relative Risk’ in Evidence.statistic.statisticType elements.

                    Release 0.17.0 (April 17, 2024) creates Recommendation (Profile of Composition) and RecommendationJustification (Profile of ArtifactAssessment) Resources including the population data.

                    Release 0.18.0 (April 19, 2024) handles most of the full guideline content instead of single recommendations, and supports entering a MAGICapp ID to get the MAGICapp file by API.

                     

                     

                    The FEvIR®: RAte of Dissemination Assessment Research (RADAR) Tool Version 0.13.0 (April 17, 2024) facilitates research conduct and project management for an investigation to measure the rate of scientific knowledge transfer.

                    Release 0.12.0 (April 16, 2024) added pagination to two modals, Rate Index Articles and Evaluate Citations, which now show 50 rows of articles per page so that the modals load more quickly. The pagination bar showing page numbers does not appear if there are 50 rows or fewer, since there is only one page. The table headers also align better when scrolling.

                    Release 0.13.0 (April 17, 2024) adds instructions for evaluating citing articles.

                     

                    The Computable Publishing®: Recommendation Authoring Tool version 0.14.0 (April 17, 2024) creates a Composition Resource with a Recommendation Profile and the associated Resources for a structured representation of a recommendation.

                    Release 0.14.0 (April 17, 2024) creates a RecommendationJustification Profile instance with artifactReference value referencing the Recommendation (i.e. the present Composition Resource).

                    FEvIR®: Recommendation Justification Builder/Viewer version 0.26.0 (April 17, 2024) creates and displays a RecommendationJustification Profile of an ArtifactAssessment Resource.

                    Release 0.26.0 (April 17, 2024) no longer displays Recommendation values derived from the artifactReference instance if referencing a PlanDefinition Resource (as this is now handled with a Composition Resource).

                    The FEvIR®: Bundle Builder/Viewer Tool version 0.1.0 (April 17, 2024) provides a human-friendly summary of a Bundle Resource.

                    Release 0.1.0 (April 17, 2024) introduces FEvIR®: Bundle Builder/Viewer to provide a human-friendly summary of a Bundle Resource.



                     

                    Quotes for Thought:

                     “Spread love everywhere you go. Let no one ever come to you without leaving happier.” – Saint Teresa of Calcutta

                     “Well done is better than well said.” – Benjamin Franklin

                     “Sometimes the most important thing in a whole day is the rest we take between two deep breaths.” – Etty Hillesum

                     “You have brains in your head. You have feet in your shoes. You can steer yourself any direction you choose.” – Dr. Seuss

                     “Have you ever noticed that anybody driving slower than you is an idiot, and anyone going faster than you is a maniac?” – George Carlin

                     

                     

                    To get involved or stay informed: HEvKA Project Page on FEvIR Platform, HEvKA Project Page on HL7 Confluence, or join any of the groups that are now meeting in the following weekly schedule:

                    Joanne Dehnbostel

                    Apr 23, 2024, 1:32:24 PM
                    to Health Evidence Knowledge Accelerator (HEvKA)

                    8 people (BA, CE, HL, JD, KS, KW, MA, RL) participated in up to 4 active working group meetings.

                     

                    The Project Management Working Group discussed the two new software tools on the FEvIR Platform: the FEvIR®: Bundle Builder/Viewer Tool and the Computable Publishing®: Baseline Measure Report Authoring Tool.

                    The Setting the Scientific Record on FHIR Working Group met with SRDR and discussed the FEvIR®: Bundle Builder/Viewer Tool and how it can be used to bulk load content to the FEvIR Platform using the API.

                    The CQL Development Working Group (a CDS EBMonFHIR sub-WG) discussed the focus for this working group and decided to focus on trial matching and study protocol representation. The group will be renamed the Clinical Research on FHIR Working Group. We will ask the Biomedical Research and Regulation and Clinical Decision Support HL7 working groups how they would like to coordinate with us.

                    The Statistic Terminology Working Group found the following results from the terms that were open for voting last week:

                    "quantile" didn't pass. We updated the comment for application for the term "quantile" and sent it back out for voting.

                    Quantile is a type of statistic but not used without specification of the portion it represents. Typically, the specification of the portion it represents includes both the number of equal portions (e.g., percentile for 100 equal portions, or quartile for 4 equal portions) and selection of one of these portions (e.g., 25th percentile or first quartile). For common uses in communicating statistic values, more specific types of quantiles (such as percentile, decile, or quartile) would be used instead of the term quantile.

                    "percentile" didn't pass and we added onto the comment for application.

                    "quartile" passed, but we updated the comment for application for the term "quartile" and sent it back out for voting, and added child concepts: "first quartile" and "third quartile".

                    "decile" passed and we reopened it for vote because we added onto the comment for application.

                    "area under the ROC curve" and "partial area under the ROC curve" both passed the vote.

                    The group then defined additional terms. There are currently 8 terms open for voting:

                     

                    Term

                    Definition

                    Alternative Terms
                    (if any)

                    Comment for application
                    (if any)

                    quantile

                    A statistic that represents the value for which the number of data points at or below it constitutes a specific portion of the total number of data points.

                    Quantile is a type of statistic but not used without specification of the portion it represents. Typically, the specification of the portion it represents includes both the number of equal portions (e.g., percentile for 100 equal portions, or quartile for 4 equal portions) and selection of one of these portions (e.g., 25th percentile or first quartile).

                    For common uses in communicating statistic values, more specific types of quantiles (such as percentile, decile, or quartile) would be used instead of the term quantile.

                    percentile

                    A quantile in which the specific portion of the number of data points is expressed as a percentage.

                    • centile
                    • %ile

                    Quantile is defined as a statistic that represents the value for which the number of data points at or below it constitutes a specific portion of the total number of data points.

                    Percentile is a type of statistic but not used to define a statistic value without specification of the portion it represents. For example, one may report a fortieth percentile (40%ile) but one does not report a percentile without specification of which percentile. 40% of the data is at or below the 40%ile.

                    Most statistic values can be reported in FHIR Evidence Resources with a statisticType element including the SEVCO term as a CodeableConcept. To report a specific percentile (such as the fortieth percentile), one may use the attributeEstimate element containing a type element with the SEVCO term for percentile as a CodeableConcept and a level element with the corresponding value (such as 40).
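                    The reporting pattern described above can be sketched as a FHIR JSON fragment (written here as a Python dict). This is a hypothetical illustration, not from the source: the system URL, code, and quantity values are placeholders, and only the elements named in the comment for application (statisticType, attributeEstimate.type, attributeEstimate.level) are shown.

```python
# Hypothetical Evidence.statistic fragment reporting a fortieth percentile:
# attributeEstimate.type carries the SEVCO term for percentile as a
# CodeableConcept, and attributeEstimate.level carries the portion (40).
# The system/code strings below are placeholders, not real SEVCO codes.

percentile_coding = {
    "system": "https://fevir.net/sevco",        # placeholder system URL
    "code": "SEVCO-percentile-placeholder",     # placeholder code
    "display": "percentile",
}

statistic = {
    "description": "40th percentile of the measured values (illustrative)",
    "attributeEstimate": [{
        "type": {"coding": [percentile_coding]},  # which kind of quantile
        "level": 40,                              # which percentile (40th)
        "quantity": {"value": 5, "unit": "mg/L"}, # the 40th-percentile value
    }],
}

print(statistic["attributeEstimate"][0]["level"])  # 40
```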

                    decile

                    A quantile in which the specific portion of the number of data points is expressed as a number of tenths.

                    Quantile is defined as a statistic that represents the value for which the number of data points at or below it constitutes a specific portion of the total number of data points.

                    Decile is a type of statistic but not used to define a statistic value without specification of the portion it represents. For example, one may report a fourth decile but one does not report a decile without specification of which decile. 40% of the data is at or below the fourth decile.

                    Most statistic values can be reported in FHIR Evidence Resources with a statisticType element including the SEVCO term as a CodeableConcept. To report a specific decile (such as the fourth decile), one may use the attributeEstimate element containing a type element with the SEVCO term for decile as a CodeableConcept and a level element with the corresponding value (such as 4).

                    quartile

                    A quantile in which the specific portion of the number of data points is expressed as a number of fourths.

                    Quantile is defined as a statistic that represents the value for which the number of data points at or below it constitutes a specific portion of the total number of data points.

                    Quartile is a type of statistic but not used to define a statistic value without specification of the portion it represents. For example, one may report a third quartile but one does not report a quartile without specification of which quartile. 75% of the data is at or below the third quartile. The second quartile is also called the median.

                    To report the first quartile, use the SEVCO term for first quartile. To report the second quartile, use the SEVCO term for median. To report the third quartile, use the SEVCO term for third quartile. To report the fourth quartile, use the SEVCO term for maximum observed value.

                    first quartile

                    A quantile for which the number of data points at or below it constitutes 25% of the total number of data points.

                    third quartile

                    A quantile for which the number of data points at or below it constitutes 75% of the total number of data points.

                    chi square for homogeneity

                    A measure of heterogeneity, based on the chi-square statistic, for reporting an analytic finding regarding whether two or more multinomial distributions are equal, accounting for chance variability.

                    • chi-square test of homogeneity
                    • chi2 heterogeneity statistic

                    A measure of heterogeneity is defined as a statistic that represents the variation or spread among values in the set of estimates across studies.

                    area under the value-by-time curve

                    An area under the curve where the curve is the repeated measures of a variable over time and the domain of interest is time.

                    • area under the value-time curve
                    • area under the value vs. time curve
                    • area under the value versus time curve

                    The area under the value-by-time curve is used for pharmacokinetics, pharmacodynamics, and physiological monitoring.

                    Releases on the FEvIR Platform:

                    Computable Publishing®: Baseline Measure Report Authoring Tool creates a Composition Resource with a BaselineMeasureReport Profile. The current version is 0.1.0 (April 22, 2024).

Release 0.1.0 (April 22, 2024) introduces Computable Publishing®: Baseline Measure Report Viewing Tool to provide a human-friendly summary of a Composition Resource with a BaselineMeasureReport Profile. It displays references to Group Resources (for the related Total Group, Intervention Group, and Comparator Group) and Baseline Measures (with EvidenceVariable and Evidence Resources), and supports data entry for Profile-specific related artifact references to the Group Resources and Profile-specific sections for Baseline Measures (with EvidenceVariable and Evidence Resources).

                     

                     

                     

                    To get involved or stay informed: HEvKA Project Page on FEvIR Platform, HEvKA Project Page on HL7 Confluence, or join any of the groups that are now meeting in the following weekly schedule:

                    Weekly Meeting Schedule and Link:

                     

Day | Time (Eastern) | Team
Monday | 8-9 am | Project Management
Monday | 9-10 am | Setting the Scientific Record on FHIR WG
Monday | 10-11 am | Clinical Research on FHIR WG (a CDS EBMonFHIR sub-WG)

Joanne Dehnbostel
Apr 24, 2024, 11:52:58 AM
to Health Evidence Knowledge Accelerator (HEvKA)

9 people (AK, BA, CE, CM, JD, KS, KW, RC, SM) participated in up to 3 active working group meetings.

The Measuring the Rate of Scientific Knowledge Transfer Working Group continued to discuss the user interface for the FEvIR®: RAte of Dissemination Assessment Research (RADAR) Tool, including improving the user experience by tapping into the user's own repository support URL so that users can more seamlessly access the full text of articles they have permission to access.

The StatisticsOnFHIR Working Group (a CDS EBMonFHIR sub-WG) viewed a demonstration of the new Computable Publishing®: Baseline Measure Report Authoring Tool and discussed possible improvements to the user interface and workflow.

                    The Ontology Management Working Group set up the sheet in the Excel spreadsheet that will be used for the Portuguese translation of the GRADE Ontology project and discussed details of data entry.

We will invite the group from GRADE Colombia to our meeting in two weeks to discuss the Spanish translation with our Spanish translators. We will meet with the Chinese translators at 8 am Eastern on Friday.

                    Releases on the FEvIR Platform:

                    The Fast Evidence Interoperability Resources (FEvIR) Platform is available for use now, but is “pre-release”.  The current version is 0.216.0 (April 23, 2024). Viewing resources is open without login.  Signing in is free and required to create content (which can then only be edited by the person who created the content).

                    FEvIR Platform Release 0.216.0 (April 23, 2024) supports conversion of a FHIR Bundle Resource of transaction type submitted to the FEvIR Platform via API into a set of FHIR Resources on the FEvIR Platform, each with a unique FEvIR Object Identifier.

                    FEvIR® API version 0.7.0 (April 23, 2024) supports GET and POST requests to retrieve Resources from the FEvIR Platform, retrieve concept JSON from CodeSystem Resources on the FEvIR Platform, or convert a ClinicalTrials.gov record to a FHIR Bundle Resource.

                    FEvIR API Release 0.7.0 (April 23, 2024) supports a POST request to submit a FHIR Resource to the FEvIR Platform.
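A FHIR Bundle of type "transaction", as accepted by the conversion described above, looks like the following minimal sketch. The resource content shown is hypothetical, and the FEvIR API endpoint and authentication details are not shown here; they would come from the platform's own documentation:

```python
import json

# Minimal FHIR transaction Bundle with one illustrative entry.
# Per the release note above, submitting such a Bundle to the FEvIR
# Platform via API yields a set of FHIR Resources, each with a
# unique FEvIR Object Identifier.
bundle = {
    "resourceType": "Bundle",
    "type": "transaction",
    "entry": [
        {
            "resource": {
                "resourceType": "Group",
                "name": "Example Total Group",  # hypothetical content
            },
            "request": {"method": "POST", "url": "Group"},
        }
    ],
}

print(json.dumps(bundle, indent=2))
```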


                    Quote for thought: "Worry never robs tomorrow of its sorrow, it only saps today of its joy." -- Leo Buscaglia

Joanne Dehnbostel
Apr 25, 2024, 2:35:49 PM
to Health Evidence Knowledge Accelerator (HEvKA)
6 people (BA, CE, JD, JO, JT, KS) participated in up to 2 active working group meetings.

Both the Funding the Ecosystem Infrastructure Working Group and the Communications Working Group discussed and developed the new Computable Publishing®: Baseline Measure Report Authoring Tool (version 0.2.0 release mentioned below).

                    Releases on the FEvIR Platform:

                    The Fast Evidence Interoperability Resources (FEvIR) Platform is available for use now, but is “pre-release”.  The current version is 0.217.0 (April 24, 2024). Viewing resources is open without login.  Signing in is free and required to create content (which can then only be edited by the person who created the content).

                    Release 0.217.0 (April 24, 2024) supports the editing of selected portions of Resource content for Group, Evidence, EvidenceVariable and ResearchStudy Resources when editing a Reference Datatype value, thus facilitating data entry across multiple Resources without requiring the use of multiple Builder Tools.

Computable Publishing®: Baseline Measure Report Authoring Tool creates a Composition Resource with a BaselineMeasureReport Profile. The current version is 0.2.0 (April 24, 2024).

                    Baseline Measure Report Authoring Tool Release 0.2.0 (April 24, 2024) adds instructions and simplifies the Groups section and uses the FEvIR Platform function for editing selected portions of Group Resources (for the related Total Group, Intervention Group, and Comparator Group), EvidenceVariable Resources (for the baseline measures), and Evidence Resources (for the statistical results).

                    The Computable Publishing®: Comparative Evidence Report Authoring Tool version 0.21.0 (April 24, 2024) creates and displays a Composition Resource with a ComparativeEvidenceReport Profile.

                    Comparative Evidence Report Authoring Tool Release 0.21.0 (April 24, 2024) simplifies data entry by using the FEvIR Platform function for editing selected portions of Group Resources, EvidenceVariable Resources, Evidence Resources, and a ResearchStudy Resource referenced throughout the content.

                    Quote for thought: "The most difficult thing is the decision to act, the rest is merely tenacity." — Amelia Earhart

Joanne Dehnbostel
Apr 26, 2024, 11:47:03 AM
to Health Evidence Knowledge Accelerator (HEvKA)

7 people (BA, CE, IK, JD, KS, MH, SL) participated in up to 2 active working group meetings.


The EBM Implementation Guide Working Group (a CDS EBMonFHIR sub-WG) discussed the need to update the M-11 representation in FHIR to reflect changes made in the specification since we first started working on it last year (https://jira.hl7.org/browse/FHIR-45357). There are 15 sections to update; so far, 10 have been completed. We need to have this done before testing at the May 2024 HL7 Connectathon.

We also discussed proposed changes to the FHIR standard regarding contributorship (https://jira.hl7.org/browse/FHIR-45293).


The Computable EBM Tools Development Working Group looked at the MAGIC app converter tool on the FEvIR Platform and demonstrated the conversion of two MAGIC files (4321 and 4311), resulting in 631 and over 900 FHIR Resources on the FEvIR Platform, respectively. These can be seen at https://fevir.net/resources/Composition/236363 and https://fevir.net/resources/Composition/238300. The second link demonstrates the ability to add graphical images to the viewer.


                    Quote for thought: “The difference between ordinary and extraordinary is that little extra.” — Jimmy Johnson, football player

Joanne Dehnbostel
Apr 28, 2024, 3:55:25 AM
to Health Evidence Knowledge Accelerator (HEvKA)

                     

                    Announcements:

Please join us for the HL7 EBMonFHIR track kickoff for the May 2024 HL7 Connectathon, which will take place during the Monday, April 29 HL7 Learning Health Systems (LHS) Working Group Meeting at 3 pm Eastern time. Please join via this Zoom link: https://hl7-org.zoom.us/j/6272738251 or visit http://www.hl7.org/concalls/CallDetails.cfm?concall=68389 for details.


                    Several HEvKA meetings will be cancelled this week to allow for participation in the GRADE Working Group Meeting in Miami.  We will be presenting our GRADE Ontology Concept paper at that time. We hope to see some of you in Miami. Please see the proposed agenda below for cancellation details.


                    16 people (BA, CA-D, HK, HL, JB, JD, JX, KS, KW, LW, MA, PW, SM, T-XL, TD, YF) from 8 countries (Belgium, Canada, Chile/Spain, China, Norway, Peru, UK, USA) participated in up to 3 active working group meetings. 


The Risk of Bias Terminology Working Group found that the term open for voting last week received a negative vote, which was addressed in the meeting, and the term was reopened for voting. The group then worked to define Language Bias and Geography Bias. There are currently 3 Risk of Bias terms open for voting:

Term
Definition
Alternative Terms
(if any)
Comment for application
(if any)

language bias

A bias in search strategy or study eligibility criteria that results from restrictions regarding the language of the study report.

                    Limiting the study reports included in a systematic review by language may result in an incomplete view of the truly available evidence.

                    The terms search strategy limits for study report characteristics not appropriate and study eligibility criteria limits for study report characteristics not appropriate are used to describe types of study selection bias that results from restrictions regarding the study report, distinguishing different steps in the search and selection process.

geography bias

A bias in search strategy or study eligibility criteria that results from restrictions regarding the geographic origin of the research.

                    Limiting the study reports included in a systematic review by country of origin may result in an incomplete view of the truly available evidence.

                    The geographic origin of the research may refer to the location of the research participants, the investigators, or their associated organizations.

The terms search strategy limits for study report characteristics not appropriate, study eligibility criteria limits for study characteristics not appropriate, and study eligibility criteria limits for study report characteristics not appropriate are used to describe types of study selection bias that result from restrictions regarding the study report, distinguishing different steps in the search and selection process.

publication bias

A study selection bias in which the publicly available studies are not representative of all conducted studies.
                    • study selection bias due to selective reporting bias
                    • study selection bias due to non-reporting bias
                    • study selection bias due to reporting bias

                    Publication bias arises from the failure to identify all studies that have been conducted, either published (i.e., publicly available) or unpublished. The term 'studies' means evidence or research results in any form where such studies would meet the study eligibility criteria without consideration of criteria regarding the form of publication. The phrase 'publicly available studies' means the studies are available to the broad academic community and the public through established distribution channels in any form, including forms with restricted access. Established distribution channels include peer-reviewed journals, books, conference proceedings, dissertations, reports by governmental or research organizations, preprints, and study registries.

                    Publication bias often leads to an overestimate in the effect in favor of the study hypothesis, because studies with statistically significant positive results are more likely to be publicly available.

                    The terms reporting bias and selective reporting bias are used to describe biases in study reports, i.e., reporting bias. To avoid confusion between types of reporting bias and types of study selection bias, the phrases 'reporting bias' and 'selective reporting bias' are appended to 'study selection bias due to' when used as alternative terms for 'publication bias'.


On Friday April 26, the GRADE Ontology Working Group found that, of the 3 terms open for voting last week, indirectness in exposure and indirectness in comparator passed and were added to the code system. Indirectness in outcome received one negative vote. The term was discussed, revised, and reopened for voting. The group then worked to define imprecision, and it is now open for voting. There are currently 2 GRADE terms open for voting.

                    The meeting was recorded and can be viewed here.

                    We also had a meeting with three Chinese GRADE methodologists who have volunteered to help with the Chinese translation, so we have added a Chinese sheet to the translation spreadsheet and look forward to collaborating with them.

                    Term
                    Definition
                    Alternative Terms
                    (if any)
                    Comment for application
                    (if any)
                    Differences between the exposure-related characteristics of the evidence and the intended target application.
                    • Indirectness in intervention
                    • Indirectness in exposure or intervention

                    Interventions are a subset of exposures intended to change outcomes.

                    Indirectness in exposure means there are differences between the interventions or exposures in the relevant research studies and the intervention or exposure under consideration in a question of interest.

                    Differences in exposure-related characteristics (such as setting, intensity, mode of delivery, timing, co-interventions, and level of expertise of person providing the intervention) may result in differences in the estimates derived from the evidence and those that would be seen in the intended target application.

                    The question of interest may vary with context, such as the key considerations for a guideline or systematic review.

                    Differences between the comparator-related characteristics of the evidence and the intended target application.
                    • Indirectness in control exposure
                    • Indirectness in control exposure (comparator)
                    • Indirectness in control intervention
                    • Indirectness in control exposure or control intervention
                    • Indirectness in reference exposure
                    • Indirectness in reference intervention

                    Comparators are exposures used as the control or reference value in comparative evidence. Interventions are a subset of exposures intended to change outcomes. Control interventions are a subset of control exposures that are used as a comparator intervention.

                    Indirectness in comparator means there are differences between the interventions or exposures used as control exposures in the relevant research studies and the interventions or exposures used as the control exposure under consideration in a question of interest.

                    Differences in comparator-related characteristics (such as setting, intensity, mode of delivery, timing, co-interventions, and level of expertise of person providing the intervention) may result in differences in the estimates derived from the evidence and those that would be seen in the intended target application.

                    The question of interest may vary with context, such as the key considerations for a guideline or systematic review.

Indirectness in outcome (reopened for voting)
                    Differences between the outcome-related characteristics of the evidence and the intended target application.
                    • Indirectness in outcome measure

                    Indirectness in outcome means there are differences between the outcomes in the relevant research studies and the outcome under consideration in a question of interest.

                    Differences in outcome-related characteristics (such as definition, method of measurement, and timing) may result in differences in the estimates derived from the evidence and those that would be seen in the intended target application.

                    Outcomes in the evidence may differ from those of primary interest. For instance, changes in an intermediate or surrogate outcome may only partially reflect or predict the outcome of interest.



                    The question of interest may vary with context, such as the key considerations for a guideline or systematic review.

Imprecision (newly open for voting)
                    The state in which the range of probable values includes values on both sides of a threshold, or the sample size or event rate is too small to provide a reliable estimate of effect.

                    The range of probable values may be expressed as a confidence interval or credible interval.
                    In the context of making a decision, imprecision is the state in which the range of probable values (commonly represented with a confidence interval) includes values on both sides of a threshold for which the decision would differ.
                    Outside the context of making a decision, imprecision is a state in which the range of probable values is large.
                    For rating imprecision, an alternative method is to rate down for imprecision if the sample size is less than the optimal information size, or if the total sample size is not large and the number of events is small.
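The two imprecision checks described above can be sketched in code: whether the range of probable values crosses a decision threshold, and whether the sample size reaches a conventional optimal information size. The sample-size formula below is the standard two-proportion calculation; the default alpha of 0.05, power of 0.80, and function names are illustrative assumptions, not part of the GRADE definition:

```python
from math import ceil, sqrt
from statistics import NormalDist

def crosses_threshold(ci_low, ci_high, threshold):
    """True if the interval contains values on both sides of the threshold,
    i.e., the decision would differ within the range of probable values."""
    return ci_low < threshold < ci_high

def optimal_information_size(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size to detect a difference between two proportions
    (conventional formula; one common way to set the optimal information size)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided alpha
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    n = (z_a * sqrt(2 * p_bar * (1 - p_bar))
         + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / (p1 - p2) ** 2
    return ceil(n)

# A relative-risk CI of 0.70-1.10 crosses a no-effect threshold of 1.0:
print(crosses_threshold(0.70, 1.10, 1.0))  # True
# Per-group sample size to detect a drop from a 30% to a 20% event rate:
print(optimal_information_size(0.30, 0.20))
```

If the pooled sample size falls below this optimal information size, one would rate down for imprecision under the alternative method described above.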



                    The Project Management Working Group prepared the proposed agenda for next week (Please note that there are several cancellations):

Day/Time (Eastern) | Working Group | Agenda Items
Monday 8-9 am | Project Management | FHIR changes and EBMonFHIR Implementation Guide issues
Monday 9-10 am | Setting the Scientific Record on FHIR | SRDR+ to FEvIR Review
Monday 10-11 am | ClinicalResearchOnFHIR (a CDS EBMonFHIR sub-WG) | Evaluate the focus for this working group
Monday 2-3 pm | Statistic Terminology | Review SEVCO terms (8 terms open for vote)
Tuesday 9-10 am | Measuring the Rate of Scientific Knowledge Transfer | Cancelled for April 30 to allow participation in GRADE WG meeting
Tuesday 2-3 pm | StatisticsOnFHIR (a CDS EBMonFHIR sub-WG) | Review Baseline Measure Report Authoring Tool progress
Tuesday 3-4 pm | Ontology Management | Review Objectives and Progress
Wednesday 8-9 am | Funding the Ecosystem Infrastructure | Cancelled for May 1 to allow participation in GRADE Working Group Meeting
Wednesday 9-10 am | Communications (Awareness, Scholarly Publications) | Cancelled for May 1 to allow participation in GRADE Working Group Meeting
Thursday 8-9 am | EBM Implementation Guide (a CDS EBMonFHIR sub-WG) | Cancelled for May 2 to allow participation in GRADE Working Group Meeting
Thursday 9-10 am | Computable EBM Tools Development | Review progress with Converter tools
Friday 9-10 am | Risk of Bias Terminology | Review SEVCO terms (3 terms open for vote)
Friday 10-11 am | GRADE Ontology | Cancelled for May 3 to allow participation in GRADE Working Group Meeting
Friday 12-1 pm | Project Management | Cancelled for May 3 to allow participation in GRADE Working Group Meeting



                    Quote for Thought: “Everybody wants to save the Earth; nobody wants to help Mom do the dishes.” ― P.J. O’Rourke, All the Trouble in the World


                    On Saturday, December 23, 2023 at 3:15:25 PM UTC+1 Joanne Dehnbostel wrote:

                    15 people (BA, BSP, CA-D, CE , HK, HL, JD, JT, KS, KW, LW, MA, PW, SM, XY) participated today in up to 3 active working group meetings.

                     


Today the Risk of Bias Working Group found 2 terms approved (unsubstantiated interpretation of results, qualitative research bias). Three terms require more votes for approval and one additional term was defined today (Incoherence among data, analysis, and interpretation), so there are currently 4 risk of bias terms open for vote.

                     

                    Term

                    Definition

                    Alternative Terms

                    Comment for application

                    Incoherence among data, analysis, and interpretation

                     

                     

                    There are one or more mismatches among hypothesis, data collected, data analysis, and results interpretation in the study report.

                     

                    The term mismatch applies to an inappropriate or wrong or inadequate relationship.

                    bias in qualitative research design

                    A bias specific to the design of qualitative research.

                    The qualitative approach used in a study should be appropriate for the research question and problem.

                    Common qualitative research approaches include (this list is not exhaustive):

• Ethnography - The aim of the study is to describe and interpret the shared cultural behavior of a group of individuals.
• Phenomenology - The study focuses on the subjective experiences and interpretations of a phenomenon encountered by individuals.
• Narrative research - The study analyzes life experiences of an individual or a group.
• Grounded theory - Generation of theory from data in the process of conducting research (data collection occurs first).
• Case study - In-depth exploration and/or explanation of issues intrinsic to a particular case. A case can be anything from a decision-making process, to a person, an organization, or a country.
• Qualitative description - There is no specific methodology, but qualitative data collection and analysis, e.g., in-depth interviews or focus groups, and hybrid thematic analysis (inductive and deductive).

                    Reference - Hong QN, Pluye P, Fàbregues S, Bartlett G, Boardman F, Cargo M, Dagenais P, Gagnon M-P, Griffiths F, Nicolau B, O’Cathain A, Rousseau M-C, Vedel I. Mixed Methods Appraisal Tool (MMAT), version 2018. Registration of Copyright (#1148552), Canadian Intellectual Property Office, Industry Canada. Accessed at: http://mixedmethodsappraisaltoolpublic.pbworks.com/w/file/fetch/127916259/MMAT_2018_criteria-manual_2018-08-01_ENG.pdf

                    bias in qualitative data collection methods

                    A bias specific to the conduct of qualitative research.

                    The data sources (e.g., archives, documents), the method of data collection (e.g., in depth interviews, group interviews, and/or observations), and the form of the data (e.g., tape recording, video material, diary, photo, and/or field notes) should be adequate and appropriate to address the research question. The term 'bias in qualitative data collection methods' may be supplemental to other terms for types of detection bias or types of selection bias.

                    bias in qualitative analysis

                    A bias specific to the analysis of qualitative research.

                    The analysis approach should be appropriate for the research question and qualitative approach (design). The term 'bias in qualitative analysis' may be supplemental to other terms for types of analysis bias. When interpretation is an integral part of qualitative analysis, bias in the interpretive analysis should use the term 'bias in qualitative analysis' rather than 'cognitive interpretive bias in reporting'.

                    To participate you can join the Scientific Evidence Code System (SEVCO) Expert Working Group at https://fevir.net/resources/Project/27845.

                     

                    Today the GRADE Ontology Working Group reviewed 11 votes (9-2) and 1 comment for the term Risk of bias. We created 2 additional terms to distinguish Risk of bias across studies from Risk of bias within a study, and we made changes to the comment for application regarding the use of these terms.

                     

                    There are 5 terms open for voting for continued discussion.

                     

                    Please visit the term pages (Risk of bias, Risk of bias across studies, Risk of bias within a study, Inconsistency, Indirectness) and click the Comment button if you would like to share any comments that will be openly viewed by anyone visiting the page.  You may also click the Vote button to anonymously register your agreement or disagreement with this term.  If you vote ‘No’ you need to add a comment (along with your vote, not publicly shared with your name) explaining what change is needed to reach agreement.

                     

                     

                    Preferred Term

                    Definition

                    Alternative Term

                    (if any)

                    Comment For Application

                    (if any)

                    Risk of bias

                    The potential for systematic error in the results or findings of a study or across studies.

                     

Related terms used by others for "risk of bias" include "internal validity problems", "study limitations", and "methodological limitations". In the definition of risk of bias, "potential for" covers both the likelihood and the magnitude of the error. A systematic error is a difference between the reported results (findings, conclusions, or effect estimates) and the actuality (the truth, the estimand, or the true value targeted for estimation). The systematic error may occur at any stage in the conception and design of a study or in the collection, analysis, interpretation, or reporting of data. Risk of bias is one of the domains that can impact the rating of the certainty of evidence. In GRADE, the term 'risk of bias' (most frequently used for 'Risk of bias across studies') is applied to the body of evidence for a single outcome. Best practice may include precisely using the code for 'Risk of bias across studies' in computer applications but the shorter phrase 'Risk of bias' when preferred for readability.

                    Risk of bias across studies

                    The potential for systematic error in results or findings across studies.

                     

Related terms used by others for "risk of bias" include "internal validity problems", "study limitations", and "methodological limitations". In the definition of risk of bias, "potential for" covers both the likelihood and the magnitude of the error. A systematic error is a difference between the reported results (findings, conclusions, or effect estimates) and the actuality (the truth, the estimand, or the true value targeted for estimation). The systematic error may occur at any stage in the conception and design of a study or in the collection, analysis, interpretation, or reporting of data. Risk of bias is one of the domains that can impact the rating of the certainty of evidence. In GRADE, the term 'risk of bias' (most frequently used for 'Risk of bias across studies') is applied to the body of evidence for a single outcome. Best practice may include precisely using the code for 'Risk of bias across studies' in computer applications but the shorter phrase 'Risk of bias' when preferred for readability.

                    Risk of bias within a study

                    The potential for systematic error in the results or findings from a single study.

                     

Related terms used by others for "risk of bias" include "internal validity problems", "study limitations", and "methodological limitations". In the definition of risk of bias, "potential for" covers both the likelihood and the magnitude of the error. A systematic error is a difference between the reported results (findings, conclusions, or effect estimates) and the actuality (the truth, the estimand, or the true value targeted for estimation). The systematic error may occur at any stage in the conception and design of a study or in the collection, analysis, interpretation, or reporting of data.

                    The GRADE approach primarily uses the term 'risk of bias' for the concept of 'risk of bias across studies' as one of the domains that can impact the rating of the certainty of evidence. When study-specific risk of bias assessment is reported, best practice may include precisely using the code for 'Risk of bias within a study' in computer applications but the shorter phrase 'Risk of bias' may be used when preferred for readability if the context is clear that it is being applied to studies individually.

                    Inconsistency

                    Variations in the findings across studies or analyses that were considered to estimate the effect.

                     

Variations may include differences across estimates from different studies or may include differences across estimates from different analyses (such as sensitivity analyses) within studies. Variations may include differences in the magnitude or direction of the findings. Criteria for evaluating inconsistency include (but are not limited to) similarity of point estimates, extent of overlap of confidence intervals, and statistical criteria including tests of heterogeneity, I² or Tau². Inconsistency is one of the domains that can impact the rating of the certainty of evidence. Related terms used by others for "inconsistency" include "heterogeneity", "statistical heterogeneity", and "clinical heterogeneity".

                    Indirectness

                    Mismatch between the populations, the exposures or interventions, the comparators, or the outcomes measured in the studies or analyses that were considered to estimate the effect and those under consideration in a question of interest.

                     

                    The question of interest may vary with context, such as the key considerations for a guideline or systematic review. Indirectness is one of the domains that can impact the rating of the certainty of evidence. Related terms used by others for "indirectness" include "lacking direct relevance" and "external validity problems".
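As an illustration of the statistical criteria mentioned under Inconsistency, here is a minimal Python sketch of Cochran's Q test of heterogeneity and the derived I² statistic. The effect estimates and variances below are hypothetical, not data from any HEvKA study:

```python
def heterogeneity(estimates, variances):
    """Cochran's Q and I-squared for a set of study effect estimates,
    using inverse-variance (fixed-effect) pooling."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, estimates))
    df = len(estimates) - 1
    # I-squared: share of total variation attributable to between-study
    # heterogeneity rather than chance (floored at 0)
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0
    return q, i2

# Hypothetical log risk ratios from three studies, with their variances
q, i2 = heterogeneity([0.10, 0.45, 0.80], [0.04, 0.05, 0.04])
```

For these invented values I² is roughly 0.67, i.e., about two thirds of the observed variation across studies exceeds what chance alone would explain.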


                     

                    The Research on FHIR Working Group worked on necessary adjustments to the FEvIR Platform.

                     

                    Releases today on the FEvIR Platform:

                    The Fast Evidence Interoperability Resources (FEvIR) Platform is available for use now, but is “pre-release”.  The current version is 0.192.0 (December 22, 2023). Viewing resources is open without login.  Signing in is free and required to create content (which can then only be edited by the person who created the content).

                      • Release 0.192.0 (December 22, 2023) increases efficiency of numerous functions across the FEvIR Platform, such as adding new Resources when using the MEDLINE-to-FEvIR Converter for multiple entries. 

                     

                     

                    Quote for thought: “Tonight’s December thirty-first, something is about to burst … Hark, it’s midnight, children dear. Duck! Here comes another year!” —Ogden Nash

                     

                    To get involved or stay informed with the Health Evidence Knowledge Accelerator (HEvKA): HEvKA Project Page on FEvIR Platform, HEvKA Project Page on HL7 Confluence, or join any of the groups that are now meeting in the following weekly schedule:

                     

                    Weekly Meeting Schedule

                    Day

                    Time (Eastern)

                    Team

                     

                    Monday 

                    8-9 am 

                    Project Management

                     

                    Monday

                    9-10 am

                    Setting the Scientific Record on FHIR WG

                     

                    Monday 

                    10-11 am

                    CQL Development WG (a CDS EBMonFHIR sub-WG)

                     

                    Monday 

                    2-3 pm 

                    Statistic Terminology WG

                     

                    Tuesday

                    9-10 am

                    Measuring the Rate of Scientific Knowledge Transfer WG

                     

                    Tuesday 

                    2-3 pm 

                    StatisticsOnFHIR WG (a CDS EBMonFHIR sub-WG)

                     

                    Tuesday 

                    3-4 pm 

                    Ontology Management WG

                     

                    Wednesday

                    8-9 am 

                    Funding the Ecosystem Infrastructure WG

                     

                    Wednesday

                    9-10 am

                    Communications (Awareness, Scholarly Publications) WG

                     

                    Thursday

                    8-9 am

                    EBM Implementation Guide WG (a CDS EBMonFHIR sub-WG)

                     

                    Thursday

                    9-10 am

                    Computable EBM Tools Development WG

                     

                    Thursday 

                    4-5 pm

                    Project Management

                     

                    Friday

                    9-10 am 

                    Risk of Bias Terminology WG

                     

                    Friday

                    10-11 am 

                    GRADE Ontology WG

                     

                    Friday

12-1 pm

                    Research on FHIR Working Group

                     

                     

                     

                     

                     

                     


                     

                     

                    HAPPY HOLIDAYS!!!

                    Joanne Dehnbostel MS, MPH

                    Research and Analysis Manager, Computable Publishing LLC

                     


                     

                    Making Science Machine-Interpretable
                    http://computablepublishing.com 

                     


                    Joanne Dehnbostel

                    unread,
                    Apr 30, 2024, 2:02:22 AM4/30/24
                    to Health Evidence Knowledge Accelerator (HEvKA), Health Evidence Knowledge Accelerator (HEvKA) Weekly Update

                    Project Coordination Updates

                    Announcement:

                    Several HEvKA meetings will be cancelled this week to allow for participation in the GRADE Working Group Meeting in Miami.  We will be presenting our GRADE Ontology Concept paper at that time. We hope to see some of you in Miami. Please see the proposed agenda below for cancellation details.


26 people (AK, BA, CA-D, CE, CM, HK, HL, IK, JB, JD, JO, JT, JX, KS, KW, LW, MA, MH, PW, RC, RL, SL, SM, T-XL, TD, YF) from 11 countries (Belgium, Brazil, Canada, Chile/Spain, China, Finland, Norway, Peru, Taiwan, UK, USA) participated this week in up to 14 active working group meetings.

The Project Management Working Group discussed the two new software tools on the FEvIR Platform: the FEvIR®: Bundle Builder/Viewer Tool and the Computable Publishing®: Baseline Measure Report Authoring Tool.

                     

The Setting the Scientific Record on FHIR Working Group met with SRDR and discussed the FEvIR®: Bundle Builder/Viewer Tool and how it can be used to bulk load content to the FEvIR Platform using the API.
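For readers unfamiliar with FHIR bulk loading, a minimal sketch of what assembling content for a single API call can look like. FHIR defines a transaction Bundle that carries many resources in one POST; the FEvIR API endpoint and authentication are not specified in this update and are not shown, and the Citation resources below are invented placeholders:

```python
import json

def build_transaction_bundle(resources):
    """Assemble FHIR resources into one transaction Bundle so a server
    can create them all in a single POST."""
    return {
        "resourceType": "Bundle",
        "type": "transaction",
        "entry": [
            {
                "resource": res,
                # Each entry carries its own request; POST to the type endpoint
                "request": {"method": "POST", "url": res["resourceType"]},
            }
            for res in resources
        ],
    }

# Hypothetical Citation resources (Citation is a FHIR R5 resource type)
citations = [
    {"resourceType": "Citation", "title": "Example study report A"},
    {"resourceType": "Citation", "title": "Example study report B"},
]
bundle = build_transaction_bundle(citations)
payload = json.dumps(bundle)  # body for a single POST to the server base URL
```

The transaction type asks the server to treat the whole upload atomically; a "batch" Bundle would instead process entries independently.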

                     

                    The Project Management Working Group prepared the proposed agenda for next week (Please note that there are several cancellations):

                    Day/Time (Eastern)

                    Working Group

                    Agenda Items

                    Monday 8-9 am

                    Project Management

                    FHIR changes and EBMonFHIR Implementation Guide issues

                    Monday 9-10 am

                    Setting the Scientific Record on FHIR

                    SRDR+ to FEvIR Review

                    Monday 10-11 am

                    ClinicalResearchOnFHIR (a CDS EBMonFHIR sub-WG)

                    Evaluate the focus for this working group

                    Monday 2-3 pm

                    Statistic Terminology

                    Review SEVCO terms (8 terms open for vote)

                    Tuesday 9 am-10 am

                    Measuring the Rate of Scientific Knowledge Transfer

                    Cancelled for April 30 to allow participation in GRADE WG meeting

                    Tuesday 2-3 pm

                    StatisticsOnFHIR (a CDS EBMonFHIR sub-WG)

                    Review Baseline Measure Report Authoring Tool progress

                    Tuesday 3-4 pm

                    Ontology Management

                    Review Objectives and Progress

                    Wednesday 8-9 am

                    Funding the Ecosystem Infrastructure

                    Cancelled for May 1 to allow participation in GRADE Working Group Meeting

                    Wednesday 9-10 am

                    Communications (Awareness, Scholarly Publications)

                    Cancelled for May 1 to allow participation in GRADE Working Group Meeting

                    Thursday 8-9 am

                    EBM Implementation Guide (a CDS EBMonFHIR sub-WG)

                    Cancelled for May 2 to allow participation in GRADE Working Group Meeting

                    Thursday 9-10 am

                    Computable EBM Tools Development

                    Review progress with Converter tools

                    Friday 9-10 am

                    Risk of Bias Terminology

                    Review SEVCO terms (3 terms open for vote)

                    Friday 10-11 am

                    GRADE Ontology

                    Cancelled for May 3 to allow participation in GRADE Working Group Meeting

                    Friday 12-1 pm

                    Project Management

                    Cancelled for May 3 to allow participation in GRADE Working Group Meeting

                     

                     

                    HL7 Standards Development Updates

                     

The CQL Development Working Group (a CDS EBMonFHIR sub-WG) discussed the focus for this working group and decided to focus our attention on trial matching and study protocol representation. The group will be renamed the Clinical Research on FHIR Working Group. We will ask the Biomedical Research and Regulation and Clinical Decision Support HL7 working groups how they would like to coordinate with us.

                     

The StatisticsOnFHIR Working Group (a CDS EBMonFHIR sub-WG) viewed a demonstration of the new Computable Publishing®: Baseline Measure Report Authoring Tool and discussed possible improvements to the user interface and workflow.

                     

The EBM Implementation Guide Working Group (a CDS EBMonFHIR sub-WG) discussed the need to update the M-11 representation in FHIR to reflect changes made in the specification since we first started working on it last year (https://jira.hl7.org/browse/FHIR-45357). There are 15 sections to update; so far 10 have been completed. We need to have this done before testing at the May 2024 HL7 Connectathon.

We also discussed proposed changes to the FHIR standard regarding contributorship: https://jira.hl7.org/browse/FHIR-45293.

                     

                    Our HL7 EBMonFHIR track kickoff for the May 2024 HL7 Connectathon took place during the Monday April 29, HL7 Learning Health Systems (LHS) Working Group Meeting at 3pm Eastern time.

                     

                    SEVCO Updates

                    The Risk of Bias Terminology Working Group found that the term open for voting last week received a negative vote which was addressed in the meeting and the term was reopened for voting. The group then worked to define Language Bias and Geography Bias. There are currently 3 Risk of Bias terms open for voting:

                    Term

                    Definition

                    Alternative Terms
                    (if any)

                    Comment for application
                    (if any)

language bias

A bias in search strategy or study eligibility criteria that results from restrictions regarding the language of the study report.

                    Limiting the study reports included in a systematic review by language may result in an incomplete view of the truly available evidence.

The terms 'search strategy limits for study report characteristics not appropriate' and 'study eligibility criteria limits for study report characteristics not appropriate' are used to describe types of study selection bias that result from restrictions regarding the study report, distinguishing different steps in the search and selection process.

                    geography bias

                    A bias in search strategy or study eligibility criteria that results from restrictions regarding the geographic origin of the research.

                    Limiting the study reports included in a systematic review by country of origin may result in an incomplete view of the truly available evidence.

                    The geographic origin of the research may refer to the location of the research participants, the investigators, or their associated organizations.

The terms 'search strategy limits for study report characteristics not appropriate', 'study eligibility criteria limits for study characteristics not appropriate', and 'study eligibility criteria limits for study report characteristics not appropriate' are used to describe types of study selection bias that result from restrictions regarding the study report, distinguishing different steps in the search and selection process.

                    publication bias

                    A study selection bias in which the publicly available studies are not representative of all conducted studies.

                    • study selection bias due to selective reporting bias
                    • study selection bias due to non-reporting bias
                    • study selection bias due to reporting bias

                    Publication bias arises from the failure to identify all studies that have been conducted, either published (i.e., publicly available) or unpublished. The term 'studies' means evidence or research results in any form where such studies would meet the study eligibility criteria without consideration of criteria regarding the form of publication. The phrase 'publicly available studies' means the studies are available to the broad academic community and the public through established distribution channels in any form, including forms with restricted access. Established distribution channels include peer-reviewed journals, books, conference proceedings, dissertations, reports by governmental or research organizations, preprints, and study registries.

                    Publication bias often leads to an overestimate in the effect in favor of the study hypothesis, because studies with statistically significant positive results are more likely to be publicly available.

The terms 'reporting bias' and 'selective reporting bias' are used to describe biases within study reports. To avoid confusion between types of reporting bias and types of study selection bias, the phrases 'reporting bias' and 'selective reporting bias' are appended to 'study selection bias due to' when used as alternative terms for 'publication bias'.
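One common way to probe for the publication bias defined above is Egger's regression asymmetry test, which regresses each study's standardized effect on its precision; a markedly non-zero intercept is consistent with smaller studies reporting larger effects. A minimal sketch with made-up effect estimates and standard errors (not from any SEVCO source):

```python
def egger_intercept(effects, std_errors):
    """Egger's regression asymmetry test: regress standardized effect
    (effect / SE) on precision (1 / SE) and return the OLS intercept.
    A non-zero intercept suggests funnel-plot asymmetry."""
    y = [e / s for e, s in zip(effects, std_errors)]  # standardized effects
    x = [1.0 / s for s in std_errors]                 # precisions
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum(
        (xi - mx) ** 2 for xi in x
    )
    return my - slope * mx  # intercept of the fitted line

# Hypothetical studies: the smaller (high-SE) studies show larger effects
intercept = egger_intercept([0.9, 0.7, 0.5, 0.3], [0.5, 0.4, 0.2, 0.1])
```

In practice one would also test the intercept's standard error for significance; this sketch only shows the point estimate.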



                     

                    Research Development Updates

                     

The Measuring the Rate of Scientific Knowledge Transfer Working Group continued to discuss the user interface for the FEvIR®: RAte of Dissemination Assessment Research (RADAR) Tool, including improving the user experience by tapping into the user's own repository support URL to allow more seamless access to the full text of articles the user has permission to access.

                     

                    Knowledge Ecosystem Liaison/ Coordination Updates

                     

                    The Ontology Management Working Group set up the sheet in the Excel spreadsheet that will be used for the Portuguese translation of the GRADE Ontology project and discussed details of data entry.

                    We will invite the group from GRADE Columbia to our meeting in two weeks to discuss the Spanish translation with our Spanish translators. We will meet with the Chinese translators at 8am Eastern on Friday.

                     

On Friday, April 26, the GRADE Ontology Working Group found that, of the 3 terms open for voting last week, indirectness in exposure and indirectness in comparator passed and were added to the code system. Indirectness in outcome received one negative vote; the term was discussed, revised, and reopened for voting. The group then worked to define imprecision, and it is now open for voting. There are currently 2 GRADE terms open for voting.

                     

                    The meeting was recorded and can be viewed here.

                    We also had a meeting with three Chinese GRADE methodologists who have volunteered to help with the Chinese translation, so we have added a Chinese sheet to the translation spreadsheet and look forward to collaborating with them.

                    Term

                    Definition

                    Alternative Terms
                    (if any)

                    Comment for application
                    (if any)

indirectness in exposure

Differences between the exposure-related characteristics of the evidence and the intended target application.

                    • Indirectness in intervention
                    • Indirectness in exposure or intervention

                    Interventions are a subset of exposures intended to change outcomes.

                    Indirectness in exposure means there are differences between the interventions or exposures in the relevant research studies and the intervention or exposure under consideration in a question of interest.

                    Differences in exposure-related characteristics (such as setting, intensity, mode of delivery, timing, co-interventions, and level of expertise of person providing the intervention) may result in differences in the estimates derived from the evidence and those that would be seen in the intended target application.

                    The question of interest may vary with context, such as the key considerations for a guideline or systematic review.

indirectness in comparator

Differences between the comparator-related characteristics of the evidence and the intended target application.

                    • Indirectness in control exposure
                    • Indirectness in control exposure (comparator)
                    • Indirectness in control intervention
                    • Indirectness in control exposure or control intervention
                    • Indirectness in reference exposure
                    • Indirectness in reference intervention

                    Comparators are exposures used as the control or reference value in comparative evidence. Interventions are a subset of exposures intended to change outcomes. Control interventions are a subset of control exposures that are used as a comparator intervention.

                    Indirectness in comparator means there are differences between the interventions or exposures used as control exposures in the relevant research studies and the interventions or exposures used as the control exposure under consideration in a question of interest.

                    Differences in comparator-related characteristics (such as setting, intensity, mode of delivery, timing, co-interventions, and level of expertise of person providing the intervention) may result in differences in the estimates derived from the evidence and those that would be seen in the intended target application.

                    The question of interest may vary with context, such as the key considerations for a guideline or systematic review.

Indirectness in outcome (reopened for voting)

                    Differences between the outcome-related characteristics of the evidence and the intended target application.

                    • Indirectness in outcome measure

                    Indirectness in outcome means there are differences between the outcomes in the relevant research studies and the outcome under consideration in a question of interest.



                    Differences in outcome-related characteristics (such as definition, method of measurement, and timing) may result in differences in the estimates derived from the evidence and those that would be seen in the intended target application.

                    Outcomes in the evidence may differ from those of primary interest. For instance, changes in an intermediate or surrogate outcome may only partially reflect or predict the outcome of interest.

                    The question of interest may vary with context, such as the key considerations for a guideline or systematic review.

Imprecision (newly open for voting)

                    The state in which the range of probable values includes values on both sides of a threshold, or the sample size or event rate is too small to provide a reliable estimate of effect.

                    The range of probable values may be expressed as a confidence interval or credible interval.
                    In the context of making a decision, imprecision is the state in which the range of probable values (commonly represented with a confidence interval) includes values on both sides of a threshold for which the decision would differ.
                    Outside the context of making a decision, imprecision is a state in which the range of probable values is large.
                    For rating imprecision, an alternative method is to rate down for imprecision if the sample size is less than the optimal information size, or if the total sample size is not large and the number of events is small.
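The decision-context definition above can be sketched as a simple check: an estimate is imprecise for a decision when its range of probable values spans the decision threshold. A minimal illustration in Python (the interval and threshold values are hypothetical):

```python
def imprecise_for_decision(ci_low, ci_high, threshold):
    """An estimate is imprecise for a decision when the range of
    probable values (e.g., a 95% confidence interval) includes
    values on both sides of the decision threshold."""
    return ci_low < threshold < ci_high

# Hypothetical relative-risk interval against a decision threshold of 1.0
print(imprecise_for_decision(0.70, 1.10, 1.0))  # True: the CI crosses the threshold
print(imprecise_for_decision(0.70, 0.95, 1.0))  # False: the whole CI is below the threshold
```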






                    FEvIR Platform Updates:

                    Both the Funding the Ecosystem Infrastructure Working Group and the Communications Working Group discussed and developed the new Computable Publishing®: Baseline Measure Report Authoring Tool  (version 0.2.0 release mentioned below).

                     

The Computable EBM Tools Development Working Group looked at the MAGICapp converter tool on the FEvIR Platform and demonstrated the conversion of two MAGIC files (4321 and 4311), resulting in 631 and over 900 FHIR Resources on the FEvIR Platform, respectively. These can be seen at https://fevir.net/resources/Composition/236363 and https://fevir.net/resources/Composition/238300; the second link demonstrates the ability to add graphical images to the viewer.

                     

                     


                    Releases on the FEvIR Platform:

                    The Fast Evidence Interoperability Resources (FEvIR) Platform is available for use now, but is “pre-release”.  The current version is 0.217.0 (April 24, 2024). Viewing resources is open without login.  Signing in is free and required to create content (which can then only be edited by the person who created the content).

                    Release 0.216.0 (April 23, 2024) supports conversion of a FHIR Bundle Resource of transaction type submitted to the FEvIR Platform via API into a set of FHIR Resources on the FEvIR Platform, each with a unique FEvIR Object Identifier.
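For context, a FHIR Bundle of type transaction packages multiple Resources, each paired with the HTTP request a server should apply as one unit of work; the FEvIR Platform then assigns each resulting Resource its own FEvIR Object Identifier. A minimal sketch of such a Bundle's structure (the resource content here is hypothetical; the shape follows the FHIR Bundle specification):

```python
import json

# Minimal FHIR transaction Bundle: each entry carries a resource plus
# the HTTP request the server should apply to it.
bundle = {
    "resourceType": "Bundle",
    "type": "transaction",
    "entry": [
        {
            "resource": {"resourceType": "Evidence", "status": "active"},
            "request": {"method": "POST", "url": "Evidence"},
        },
        {   # hypothetical minimal Group content
            "resource": {"resourceType": "Group", "type": "person", "membership": "definitional"},
            "request": {"method": "POST", "url": "Group"},
        },
    ],
}

payload = json.dumps(bundle)  # body for a POST to the platform's API endpoint
```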

                    Release 0.217.0 (April 24, 2024) supports the editing of selected portions of Resource content for Group, Evidence, EvidenceVariable and ResearchStudy Resources when editing a Reference Datatype value, thus facilitating data entry across multiple Resources without requiring the use of multiple Builder Tools.

                     

                    FEvIR® API version 0.7.0 (April 23, 2024) supports GET and POST requests to retrieve Resources from the FEvIR Platform, retrieve concept JSON from CodeSystem Resources on the FEvIR Platform, or convert a ClinicalTrials.gov record to a FHIR Bundle Resource.

                    FEvIR API Release 0.7.0 (April 23, 2024) supports a POST request to submit a FHIR Resource to the FEvIR Platform.

                     

                    Releases on the FEvIR Platform:

                    Computable Publishing®: Baseline Measure Report Authoring Tool creates a Composition Resource with a BaselineMeasureReport Profile. The current version is 0.2.0 (April 24, 2024)

                    Baseline Measure Report Authoring Tool Release 0.2.0 (April 24, 2024) adds instructions and simplifies the Groups section and uses the FEvIR Platform function for editing selected portions of Group Resources (for the related Total Group, Intervention Group, and Comparator Group), EvidenceVariable Resources (for the baseline measures), and Evidence Resources (for the statistical results).

                    The Computable Publishing®: Comparative Evidence Report Authoring Tool version 0.21.0 (April 24, 2024) creates and displays a Composition Resource with a ComparativeEvidenceReport Profile.

                    Comparative Evidence Report Authoring Tool Release 0.21.0 (April 24, 2024) simplifies data entry by using the FEvIR Platform function for editing selected portions of Group Resources, EvidenceVariable Resources, Evidence Resources, and a ResearchStudy Resource referenced throughout the content.

                     



                    Quote for thought: "Worry never robs tomorrow of its sorrow, it only saps today of its joy." -- Leo Buscaglia

                    Quote for thought: "The most difficult thing is the decision to act, the rest is merely tenacity." — Amelia Earhart

                    Quote for thought: “The difference between ordinary and extraordinary is that little extra.” — Jimmy Johnson, football player

                    Quote for Thought: “Everybody wants to save the Earth; nobody wants to help Mom do the dishes.” ― P.J. O’Rourke, All the Trouble in the World

                     

                     

                     

                     


                    Joanne Dehnbostel

                    unread,
                    Apr 30, 2024, 12:51:57 PM4/30/24
                    to Health Evidence Knowledge Accelerator (HEvKA)



6 people (BA, CE, JD, KS, KW, MA) participated in up to 4 active working group meetings. 

                    The Project Management Working Group discussed improvements to the MyBallot software to make it faster, more stable, and more efficient. 

The Setting the Scientific Record on FHIR Working Group discussed creating examples of Evidence Resources as requested in this Jira ticket:

FHIR-45094 - Add comment to Evidence.relatedArtifact (and example) to explain how to express the source of extracted evidence.
RESOLVED - CHANGE REQUIRED
We proceeded to create one example using quoted text. The original article can be found at https://fevir.net/resources/Citation/238142 and the Evidence Resource created can be found at https://fevir.net/resources/Evidence/238143

The ClinicalResearchOnFHIR Working Group (a CDS EBMonFHIR sub-WG) discussed the change of focus from CQL development to ClinicalResearchOnFHIR during this meeting time, and looked at the CDISC website in anticipation of CDISC-to-FHIR mapping: https://www.cdisc.org/standards/real-world-data/fhir-cdisc-joint-mapping-implementation-guide-v1-0

                    The Statistic Terminology Working Group found the following results from last week's voting:

• "quantile", "percentile", "decile", and "quartile" each passed with 6 votes; we added hyperlinks to other terms in the comments for application.

• "first quartile" and "third quartile" each passed with 6 votes, and a term for "second quartile" was suggested.

• "area under the value-by-time curve" passed with 7 votes; the term was renamed "area under the value-time curve" and "area under the value-by-time curve" was added to the alternative terms.

• "chi-square test of homogeneity" did not pass (4-2); the comment for application was changed.

• "measure for heterogeneity" previously passed; we changed the comment for application and re-opened it for vote.

                    There are currently 2 statistic terms open for voting:

                    Term
                    Definition
                    Alternative Terms
                    (if any)
                    Comment for application
                    (if any)
                    Vote
                    Comment
                    A statistic that represents the variation or spread among values in the set of estimates across studies.
                    • measure of statistical heterogeneity

There are several types of heterogeneity (or diversity), which are important factors in determining whether or not evidence should be pooled. Qualitative descriptors of explainable sources of heterogeneity include clinical heterogeneity and methodological heterogeneity. Clinical heterogeneity may refer to variations in the population, intervention, comparator, or outcome. Methodological heterogeneity may refer to variations in study design. Statistical heterogeneity, a quantitative measure of heterogeneity whether explained or not, is described here.

                    A measure of dispersion is defined as a statistic that represents the variation or spread among data values in a dataset or data distribution. In the context of a meta-analysis, a measure of heterogeneity is a measure of dispersion in which the dataset is the set of estimates across studies.


                    A measure of heterogeneity, based on the chi-square statistic, for reporting an analytic finding regarding whether two or more multinomial distributions are equal, accounting for chance variability.
                    • chi-square test of homogeneity
                    • chi2 heterogeneity statistic

                    A measure of heterogeneity is defined as a statistic that represents the variation or spread among values in the set of estimates across studies.
Chi-square for homogeneity assesses whether observed differences in results are compatible with chance alone.
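As an illustration of a chi-square-based heterogeneity statistic, Cochran's Q measures the spread of study estimates around their weighted mean, and I2 is commonly derived from it. A minimal sketch (the study estimates and weights below are hypothetical):

```python
def cochran_q_and_i2(estimates, weights):
    """Cochran's Q: weighted sum of squared deviations of study
    estimates from their weighted mean. I2 = max(0, (Q - df) / Q)
    expresses the share of variability beyond chance."""
    wsum = sum(weights)
    mean = sum(w * y for w, y in zip(weights, estimates)) / wsum
    q = sum(w * (y - mean) ** 2 for w, y in zip(weights, estimates))
    df = len(estimates) - 1
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0
    return q, i2

# Hypothetical log-risk-ratio estimates from four studies with inverse-variance weights
q, i2 = cochran_q_and_i2([0.10, 0.25, -0.05, 0.40], [50, 40, 30, 20])
```

When all study estimates are identical, Q is zero and I2 is zero, reflecting no statistical heterogeneity.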



Announcement: Our HL7 EBMonFHIR track kickoff for the May 2024 HL7 Connectathon took place during the HL7 Learning Health Systems (LHS) Working Group meeting on Monday, April 29, at 3pm Eastern time.

                    Releases on the FEvIR Platform:

                    The Fast Evidence Interoperability Resources (FEvIR) Platform is available for use now, but is “pre-release”.  The current version is 0.218.0 (April 27, 2024). Viewing resources is open without login.  Signing in is free and required to create content (which can then only be edited by the person who created the content).

      Release 0.218.0 (April 27, 2024) adjusts Profiles of Composition Resources on new Resource creation and editing of the Narrative datatype to use an html <div> wrapper, and prevents a crash when displaying markdown elements without proper markdown format.
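For reference, the FHIR Narrative datatype requires its div element to be an xhtml <div> in the XHTML namespace. Generating one can be sketched as follows (the helper function is illustrative, not the FEvIR implementation):

```python
def narrative(text, status="generated"):
    """Wrap plain text in the xhtml <div> wrapper required by the
    FHIR Narrative datatype (a status plus a div in the XHTML namespace)."""
    return {
        "status": status,
        "div": f'<div xmlns="http://www.w3.org/1999/xhtml">{text}</div>',
    }

composition_text = narrative("Baseline Measure Report summary")
```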

                    Computable Publishing®: Baseline Measure Report Authoring Tool creates a Composition Resource with a BaselineMeasureReport Profile. The current version is 0.3.0 (April 27, 2024)

                          Release 0.3.0 (April 27, 2024) is revised to include an html <div> wrapper in the Narrative datatype.

                    Computable Publishing®: ClinicalTrials.gov-to-FEvIR Converter version 4.13.0 (April 27, 2024)  converts ClinicalTrials.gov JSON to FEvIR Resources in FHIR JSON.

                          Release 4.13.0 (April 27, 2024) includes an html <div> wrapper in the Narrative datatype in generated Composition Resources.

                    The Computable Publishing®: Comparative Evidence Report Authoring Tool version 0.22.0 (April 27, 2024) creates and displays a Composition Resource with a ComparativeEvidenceReport Profile.

                          Release 0.22.0 (April 27, 2024) is revised to include an html <div> wrapper in the Narrative datatype.

                    The FEvIR®: Composition Builder/Viewer version 0.17.0 (April 27, 2024) creates and displays a Composition Resource.

                          Release 0.17.0 (April 27, 2024) is revised to include an html <div> wrapper in the Narrative datatype.

Computable Publishing®: GRADEpro-to-FEvIR Converter version 0.4.0 (April 27, 2024) converts structured data from GRADEpro to FHIR JSON. 

      Release 0.4.0 (April 27, 2024) includes an html <div> wrapper in the Narrative datatype in generated Composition Resources. Composition author is a reference instead of a ContactDetail, so the author string needs to be in the “display” field and not “name.”

                    Computable Publishing®: Guideline Authoring Tool version 0.7.0 (April 27, 2024) creates a Composition Resource with a Guideline Profile

                          Release 0.7.0 (April 27, 2024) is revised to include an html <div> wrapper in the Narrative datatype.

                    Computable Publishing®: M11 Report Authoring Tool version 0.5.0 (April 27, 2024) creates and displays a Composition Resource with an M11Report Profile

      Release 0.5.0 (April 27, 2024) is revised to match changes in the expected M11 Template and to include an html <div> wrapper in the Narrative datatype.

                    Computable Publishing®: MAGIC-to-FEvIR Converter version 0.19.0 (April 27, 2024) converts data from a MAGICapp JSON file (demo files available) to FEvIR Resources in FHIR JSON.

      Release 0.19.0 (April 27, 2024) includes an html <div> wrapper in the Narrative datatype in generated Composition Resources. Prevents a crash when certain elements are missing in the MAGICapp JSON. Composition author is a reference instead of a ContactDetail, so the author string needs to be in the “display” field and not “name.”

The Computable Publishing®: Recommendation Authoring Tool version 0.15.0 (April 27, 2024) creates a Composition Resource with a Recommendation Profile and the associated Resources for a structured representation of a recommendation.

                          Release  0.15.0 (April 27, 2024) is revised to include an html <div> wrapper in the Narrative datatype.

Computable Publishing®: Summary Of Findings Authoring Tool version 0.23.0 (April 27, 2024) creates and displays a SummaryOfFindings Profile of Composition Resource.

                          Release 0.23.0 (April 27, 2024) is revised to include an html <div> wrapper in the Narrative datatype.


                    Quote for Thought: “Monday, the start of a new week, with brand-new opportunities to enjoy all that life has to offer.” – Audrey Carlan, author


                    Joanne Dehnbostel

                    unread,
                    May 1, 2024, 8:45:56 AM5/1/24
                    to Health Evidence Knowledge Accelerator (HEvKA)

                     

                     

                     

                    4 people (AK, BA, KS, KW) participated today in up to 2 active working group meetings. 

34 people (AK, AN, BA, CA-D, CE, CM, ER, GL, HK, HL, IK, JB, JD, JJ, JO, JT, JX, KR, KS, KW, LW, MA, MH, PW, RC, RL, SK, SL, SM, T-XL, TD, XC, XZ, YF) from 14 countries (Belgium, Brazil, Canada, Chile/Spain, China, Finland, Germany, Italy, Norway, Peru, Poland, Taiwan, UK, USA) participated in the month of April in up to 62 active working group meetings.

                    The Measuring the Rate of Scientific Knowledge Transfer Working Group meeting was cancelled to allow for travel to the GRADE working group meeting in Miami.

                    The StatisticsOnFHIR Working Group (a CDS EBMonFHIR sub-WG) demonstrated the BaselineMeasureReport Viewing Tool and discussed potential improvements.

The Ontology Management Working Group discussed converting the vote management from using FHIR Resources to handle each vote to an internal system.

                    There is no need to use FHIR for the vote data because the vote data is not exchanged, and this should greatly improve performance and reduce complexity.

Quote for Thought: “What we fear of doing most is usually what we most need to do.” — Ralph Waldo Emerson, philosopher

                    Joanne Dehnbostel

                    unread,
                    May 2, 2024, 8:08:12 AM5/2/24
                    to Health Evidence Knowledge Accelerator (HEvKA), GRADE Ontology Project Group



                     

                     

                     

                     

There were no HEvKA Meetings on May 1 to allow for participation in the GRADE working group meeting. 


                    Representatives of the GRADE Ontology Working Group (Paul Whaley and Brian Alper) presented the GRADE Ontology Concept paper which was approved at the GRADE meeting with 96% of the vote! (80% was required for approval.)


The HL7 Clinical Decision Support Working Group met today and voted to approve FHIR tracker https://jira.hl7.org/browse/FHIR-45293, asked for clarification on https://jira.hl7.org/browse/FHIR-45394, and determined that there are questions of "ownership" of the M11 spec (https://jira.hl7.org/browse/FHIR-45357).


                    Quote for thought: "Those who say it cannot be done should not interrupt those doing it.” — Chinese proverb


                    Joanne Dehnbostel

                    unread,
                    May 3, 2024, 11:13:55 AM5/3/24
                    to Health Evidence Knowledge Accelerator (HEvKA)


                     

                    There were no HEvKA Meetings on May 2 to allow for participation in the GRADE working group meeting. 


                    Releases on the FEvIR Platform:

                    FEvIR®: My Ballot version 0.7.0 (May 02, 2024) is used to facilitate the Scientific Evidence Code System (SEVCO) development.

      Release 0.7.0 (May 02, 2024) completely rewrites how votes are stored in the database. This will result in much faster load times when loading My Ballot and submitting votes.

                    FEvIR®: CodeSystem Builder/Viewer version 0.41.0 (May 02, 2024) creates and displays code system terms (concepts) in a CodeSystem Resource.

      Release 0.41.0 (May 02, 2024) makes the user's votes for a term load instantly; for the admin of a code system, all votes now load instantly as well. Also for admin, if the term is open for voting, the total vote count now shows the number of Yes votes and No votes cast on or after the open-for-vote date, counting only each user's latest vote, and all votes that apply to that count are highlighted in yellow in the list of votes. This will allow much quicker tallying.
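The tallying rule described here (count only each user's latest vote cast on or after the open-for-vote date) can be sketched as follows; the vote record shape is hypothetical, not the My Ballot data model:

```python
from datetime import date

def tally(votes, open_date):
    """Count Yes/No using only each user's latest vote cast on or
    after the open-for-vote date; earlier votes are ignored."""
    latest = {}
    for v in sorted(votes, key=lambda v: v["date"]):
        if v["date"] >= open_date:
            latest[v["user"]] = v["vote"]  # later votes overwrite earlier ones
    yes = sum(1 for choice in latest.values() if choice == "Yes")
    return {"Yes": yes, "No": len(latest) - yes}

votes = [
    {"user": "A", "vote": "No",  "date": date(2024, 4, 1)},   # before reopening: ignored
    {"user": "A", "vote": "Yes", "date": date(2024, 4, 20)},
    {"user": "B", "vote": "No",  "date": date(2024, 4, 18)},
    {"user": "B", "vote": "Yes", "date": date(2024, 4, 25)},  # B's latest vote counts
]
print(tally(votes, date(2024, 4, 15)))  # {'Yes': 2, 'No': 0}
```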

FEvIR®: Bundle Builder/Viewer version 0.2.0 (May 2, 2024) provides a human-friendly summary of a Bundle Resource.

      Release 0.2.0 (May 2, 2024) adds a citation summary for how to cite the Bundle Resource, using an identifier element.

                    Quote for Thought: “Travel is never a matter of money but of courage.” – Paulo Coelho

                    Joanne Dehnbostel

                    unread,
                    May 7, 2024, 7:37:01 AM5/7/24
                    to Health Evidence Knowledge Accelerator (HEvKA), Health Evidence Knowledge Accelerator (HEvKA) Weekly Update

                    Weekly update April 29-May 3
                    Project Coordination Updates:
7 of this week’s meetings were cancelled to allow for participation in, and transportation to, the GRADE Working Group Meeting in Miami, Florida.
                    On Wednesday May 1, Representatives of the GRADE Ontology Working Group (Paul Whaley and Brian Alper) presented the GRADE Ontology Concept paper which was approved at the GRADE meeting with 96% of the vote! (80% was required for approval.)
                     
7 people (AK, BA, CE, JD, KS, KW, MA) from 2 countries (UK, USA) participated this week in up to 7 active working group meetings.
34 people (AK, AN, BA, CA-D, CE, CM, ER, GL, HK, HL, IK, JB, JD, JJ, JO, JT, JX, KR, KS, KW, LW, MA, MH, PW, RC, RL, SK, SL, SM, T-XL, TD, XC, XZ, YF) from 14 countries (Belgium, Brazil, Canada, Chile/Spain, China, Finland, Germany, Italy, Norway, Peru, Poland, Taiwan, UK, USA) participated in the month of April in up to 62 active working group meetings.
Our HL7 EBMonFHIR track kickoff for the May 2024 HL7 Connectathon took place during the HL7 Learning Health Systems (LHS) Working Group meeting on Monday, April 29, at 3pm Eastern time.
                     
                    The Project Management Working Group created the proposed agenda for May 6-10:
Day/Time (Eastern) | Working Group | Agenda Items
Monday 8-9 am | Project Management | FHIR changes and EBMonFHIR Implementation Guide issues
Monday 9-10 am | Setting the Scientific Record on FHIR | SRDR+ to FEvIR Review
Monday 10-11 am | ClinicalResearchOnFHIR (a CDS EBMonFHIR sub-WG) | Create Evidence Resource examples
Monday 2-3 pm | Statistic Terminology | Review SEVCO terms (2 terms open for vote)
Tuesday 9-10 am | Measuring the Rate of Scientific Knowledge Transfer | Review Progress on Pilot Project and Software updates
Tuesday 2-3 pm | StatisticsOnFHIR (a CDS EBMonFHIR sub-WG) | Review Baseline Measure Report Authoring Tool progress
Tuesday 3-4 pm | Ontology Management | Review Objectives and Progress
Wednesday 8-9 am | Funding the Ecosystem Infrastructure | Review progress on product launch readiness checklist for making guidelines computable
Wednesday 9-10 am | Communications (Awareness, Scholarly Publications) | Global Evidence Summit, Study Design Paper
Thursday 8-9 am | EBM Implementation Guide (a CDS EBMonFHIR sub-WG) | Review EBMonFHIR IG change requests
Thursday 9-10 am | Computable EBM Tools Development | Review progress with Converter tools
Friday 9-10 am | Risk of Bias Terminology | Review SEVCO terms (1 term open for vote)
Friday 10-11 am | GRADE Ontology | Review GRADE terms (2 terms open for vote)
Friday 12-1 pm | Project Management | Create Agenda for the Upcoming Week
                     
                     
                    On Monday April 29, the Project Management Working Group discussed improvements to the MyBallot software to make it faster, more stable, and more efficient. 
                     
                    The Setting the Scientific Record on FHIR Working Group discussed creating examples of Evidence Resources as requested in this Jira ticket 
                    FHIR-45094 - Add comment to Evidence.relatedArtifact (and example) to explain how to express the source of extracted evidence.  
                    RESOLVED - CHANGE REQUIRED
                     
We proceeded to create one example using quoted text. The original article can be found at https://fevir.net/resources/Citation/238142 and the Evidence Resource created can be found at https://fevir.net/resources/Evidence/238143
                     
                    HL7 Standards Development Updates:
                     
On Monday April 29, the ClinicalResearchOnFHIR Working Group (a CDS EBMonFHIR sub-WG) discussed the change of focus from CQL development to ClinicalResearchOnFHIR during this meeting time, and looked at the CDISC website in anticipation of CDISC-to-FHIR mapping: https://www.cdisc.org/standards/real-world-data/fhir-cdisc-joint-mapping-implementation-guide-v1-0
                     
                    On Tuesday April 30, the StatisticsOnFHIR Working Group (a CDS EBMonFHIR sub-WG) demonstrated the BaselineMeasureReport Viewing Tool and discussed potential improvements.
On Wednesday May 1, the HL7 Clinical Decision Support Working Group met, voted to approve FHIR tracker https://jira.hl7.org/browse/FHIR-45293, asked for clarification on https://jira.hl7.org/browse/FHIR-45394, and determined that there are open questions of "ownership" of the M11 spec https://jira.hl7.org/browse/FHIR-45357.


                     
                     
                    SEVCO Updates:
                     
On Monday April 29, the Statistic Terminology Working Group found the following results from last week's voting:
• "quantile" passes with 6 votes
• "percentile" passes with 6 votes
• "decile" passes with 6 votes
• "quartile" passes with 6 votes (hyperlinks to other terms were added in the comment for application)
• a suggestion was made to add a term for "second quartile"
• "first quartile" passes with 6 votes
• "third quartile" passes with 6 votes
• "area under the value-by-time curve" passes with 7 votes; the term is renamed "area under the value-time curve", with "area under the value-by-time curve" added to alternative terms
• "chi-square test of homogeneity" did not pass (4-2); the comment for application was changed
• "measure for heterogeneity" previously passed; the comment for application was changed and the term was re-opened for vote
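As a quick illustration of how the approved terms relate, quartiles, deciles, and percentiles are all quantiles (cut points dividing an ordered dataset into equal-sized groups). A minimal sketch using Python's standard library (the dataset is hypothetical, and `statistics.quantiles` uses its default "exclusive" method here):

```python
import statistics

# Hypothetical dataset, for illustration only
data = [2, 4, 4, 5, 7, 8, 9, 11, 12, 15, 18, 20]

# Quartiles: the three cut points dividing the data into four equal groups
q1, q2, q3 = statistics.quantiles(data, n=4)  # first, second, third quartile

# Deciles: nine cut points dividing the data into ten equal groups
deciles = statistics.quantiles(data, n=10)

# Percentiles: ninety-nine cut points dividing the data into 100 equal groups
percentiles = statistics.quantiles(data, n=100)

print(q1, q2, q3)
```

Note that the second quartile is simply the median, which is one reason the group discussed whether a separate "second quartile" term is needed.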
There are currently 2 statistic terms open for voting:

Term 1
Definition: A statistic that represents the variation or spread among values in the set of estimates across studies.
Alternative Terms: • measure of statistical heterogeneity
Comment for application: There are several types of heterogeneity (or diversity), which are important factors in determining whether or not evidence should be pooled. Qualitative descriptors of explainable sources of heterogeneity include clinical heterogeneity and methodological heterogeneity. Clinical heterogeneity may refer to variations in the population, intervention, comparator, or outcome. Methodological heterogeneity may refer to variations in study design. Statistical heterogeneity, which is a quantitative measure of heterogeneity, whether explained or not, is described here.
A measure of dispersion is defined as a statistic that represents the variation or spread among data values in a dataset or data distribution. In the context of a meta-analysis, a measure of heterogeneity is a measure of dispersion in which the dataset is the set of estimates across studies.

Term 2
Definition: A measure of heterogeneity, based on the chi-square statistic, for reporting an analytic finding regarding whether two or more multinomial distributions are equal, accounting for chance variability.
Alternative Terms: • chi-square test of homogeneity • chi2 heterogeneity statistic
Comment for application: A measure of heterogeneity is defined as a statistic that represents the variation or spread among values in the set of estimates across studies. Chi square for homogeneity assesses whether observed differences in results are compatible with chance alone.
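For intuition about how a chi-square-based measure of heterogeneity is computed in practice, one common approach (offered here only as an illustration, not as the method the SEVCO term will reference) is Cochran's Q with the derived I² statistic. A minimal sketch, using hypothetical effect estimates and variances and the usual inverse-variance weighting:

```python
# Cochran's Q and I-squared: a common chi-square-based measure of
# statistical heterogeneity across study estimates (illustrative only).
effects = [0.1, 0.5, 0.9, 0.2]      # hypothetical per-study effect estimates
variances = [0.01, 0.02, 0.01, 0.04]  # hypothetical per-study variances

weights = [1 / v for v in variances]  # inverse-variance weights
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Q: weighted sum of squared deviations from the pooled estimate;
# under homogeneity it follows a chi-square distribution with k-1 df.
q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))

df = len(effects) - 1
# I^2: percentage of variability attributed to heterogeneity rather than chance.
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(round(q, 3), round(i_squared, 1))
```

Here Q compares observed differences in results against what chance alone would produce, matching the comment for application above.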
                     
                    On Friday May 3, the Risk of Bias Terminology Working Group found that of the 3 terms open for vote, language bias and geography bias received enough votes to be added to the code system. Publication bias had one negative vote with a request to change the comment for application. This was changed based on the comment and the term was reopened for vote. The group also discussed changes to the MyBallot software that make it much easier for administrators to tally and keep track of the votes.
language bias
Definition: A bias in search strategy or study eligibility criteria that results from restrictions regarding the language of the study report.
Comment for application: Limiting the study reports included in a systematic review by language may result in an incomplete view of the truly available evidence.
The terms search strategy limits for study report characteristics not appropriate and study eligibility criteria limits for study report characteristics not appropriate are used to describe types of study selection bias that result from restrictions regarding the study report, distinguishing different steps in the search and selection process.

geography bias
Definition: A bias in search strategy or study eligibility criteria that results from restrictions regarding the geographic origin of the research.
Comment for application: Limiting the study reports included in a systematic review by country of origin may result in an incomplete view of the truly available evidence.
The geographic origin of the research may refer to the location of the research participants, the investigators, or their associated organizations.
The terms search strategy limits for study report characteristics not appropriate, study eligibility criteria limits for study characteristics not appropriate, and study eligibility criteria limits for study report characteristics not appropriate are used to describe types of study selection bias that result from restrictions regarding the study report, distinguishing different steps in the search and selection process.

publication bias (re-opened for voting)
Definition: A study selection bias in which the publicly available studies are not representative of all conducted studies.
Alternative Terms:
• study selection bias due to selective reporting bias
• study selection bias due to non-reporting bias
• study selection bias due to reporting bias
Comment for application: Publication bias arises from the failure to identify all studies that have been conducted, either published (i.e., publicly available) or unpublished. The term 'studies' means evidence or research results in any form where such studies would meet the study eligibility criteria without consideration of criteria regarding the form of publication. The phrase 'publicly available studies' means the studies are available to the broad academic community and the public through established distribution channels in any form, including forms with restricted access. Established distribution channels include peer-reviewed journals, books, conference proceedings, dissertations, reports by governmental or research organizations, preprints, and study registries.
Publication bias often leads to an overestimate of the effect in favor of the study hypothesis, because studies with statistically significant positive results are more likely to be publicly available.
The terms reporting bias and selective reporting bias are used to describe biases in study reports, i.e., reporting bias. To avoid confusion between biases in study reports and biases in study selection, when either 'reporting bias' or 'selective reporting bias' is used as an alternative for 'publication bias', the term is prefixed with 'study selection bias due to', as shown below:
• study selection bias due to reporting bias
• study selection bias due to selective reporting bias
                     
                     
                    Knowledge Ecosystem Liaison Updates:

On Tuesday April 30, the Ontology Management Working Group discussed converting vote management from using FHIR Resources for each vote to an internal system.
There is no need to use FHIR for the vote data because the vote data is not exchanged, and this change should greatly improve performance and reduce complexity.




                     


                    Releases on the FEvIR Platform:
                    The Fast Evidence Interoperability Resources (FEvIR) Platform is available for use now, but is “pre-release”.  The current version is 0.218.0 (April 27, 2024). Viewing resources is open without login.  Signing in is free and required to create content (which can then only be edited by the person who created the content).
Release 0.218.0 (April 27, 2024) adjusts Profiles of Composition Resources so that, on new Resource creation and editing, the Narrative datatype uses an html <div> wrapper, and prevents a crash when displaying markdown elements without proper markdown format.
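For context on the html <div> wrapper mentioned in several release notes below: the FHIR Narrative datatype requires its div element to be an XHTML <div> in the http://www.w3.org/1999/xhtml namespace. A minimal sketch of producing such a wrapper (the make_narrative helper is hypothetical, not the FEvIR Platform's implementation):

```python
import json
from html import escape

def make_narrative(text: str) -> dict:
    """Build a FHIR Narrative datatype with the required XHTML <div> wrapper."""
    return {
        "status": "generated",  # narrative generated from structured data
        # FHIR requires the xhtml namespace declaration on the div element
        "div": f'<div xmlns="http://www.w3.org/1999/xhtml">{escape(text)}</div>',
    }

narrative = make_narrative("Summary of findings & conclusions")
print(json.dumps(narrative, indent=2))
```

Escaping the text keeps the div well-formed XHTML even when the narrative contains characters like `&` or `<`.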
                     
                    FEvIR® API version 0.8.0 (May 3, 2024) supports GET and POST requests to retrieve Resources from the FEvIR Platform, retrieve concept JSON from CodeSystem Resources on the FEvIR Platform, or convert a ClinicalTrials.gov record to a FHIR Bundle Resource.
                    Release 0.8.0 (May 3, 2024) now allows fhirEntry to be a JSON object in addition to a string.
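The GET-vs-POST shape of the API described above can be pictured with a small client sketch. The base URL path, route names, and parameter names below are hypothetical (this update does not document the actual FEvIR API routes); only the request shapes are the point:

```python
import json
import urllib.request

BASE = "https://fevir.net/api"  # hypothetical base path, for illustration only

def get_resource_request(resource_id: str) -> urllib.request.Request:
    # GET: retrieve a Resource from the FEvIR Platform by identifier
    return urllib.request.Request(f"{BASE}/resource/{resource_id}", method="GET")

def convert_ctgov_request(nct_id: str) -> urllib.request.Request:
    # POST: ask the server to convert a ClinicalTrials.gov record
    # to a FHIR Bundle Resource (route and body field are hypothetical)
    body = json.dumps({"nctId": nct_id}).encode()
    return urllib.request.Request(
        f"{BASE}/ctgov-to-bundle",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = get_resource_request("238143")
print(req.method, req.full_url)
```

The requests are constructed but not sent, so the sketch stays self-contained.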
                     
FEvIR®: My Ballot version 0.7.0 (May 2, 2024) is used to facilitate the Scientific Evidence Code System (SEVCO) development.
      Release 0.7.0 (May 2, 2024) completely rewrites how votes are stored in the database, resulting in much faster load times when loading My Ballot and submitting votes.
                     
The FEvIR®: RAte of Dissemination Assessment Research (RADAR) Tool version 0.14.0 (May 3, 2024) facilitates research conduct and project management for an investigation to measure the rate of scientific knowledge transfer.
      Release 0.14.0 (May 3, 2024) adds, in the Rate Articles table, a link from each article to a PubMed query, and fixes a bug in which an author’s last name was sometimes followed by a comma before the year in the article title list.
                     
FEvIR®: Bundle Builder/Viewer version 0.2.0 (May 2, 2024) provides a human-friendly summary of a Bundle Resource.
                          Release 0.2.0 Bundle Builder/Viewer (May 2, 2024) adds a citation summary for how to cite the Bundle Resource, using an identifier element.
                     
                    FEvIR®: CodeSystem Builder/Viewer version 0.42.0 (May 03, 2024) creates and displays code system terms (concepts) in a CodeSystem Resource.
Release 0.41.0 (May 2, 2024) makes the user's votes for a term load instantly, and for the admin of a code system, all votes now load instantly as well. For an admin, when a term is open for voting, the total vote count now shows the number of Yes votes and No votes cast on or after the open-for-vote date, counting only each user's latest vote, and all votes included in that count are highlighted in yellow in the list of votes. This allows much quicker tallying.
Release 0.42.0 (May 3, 2024) updates the vote count for CodeSystem administrators when switching terms, and the vote tally now also includes a count of duplicate votes and of votes cast before the open-for-voting date. The vote summary and list of voters are now automatically generated.
                     
                    The FEvIR®: Composition Builder/Viewer version 0.17.0 (April 27, 2024) creates and displays a Composition Resource.
                          Release 0.17.0 (April 27, 2024) is revised to include an html <div> wrapper in the Narrative datatype.
                     
                    Computable Publishing®: ClinicalTrials.gov-to-FEvIR Converter version 4.13.0 (April 27, 2024)  converts ClinicalTrials.gov JSON to FEvIR Resources in FHIR JSON.
                          Release 4.13.0 (April 27, 2024) includes an html <div> wrapper in the Narrative datatype in generated Composition Resources.
Computable Publishing®: GRADEpro-to-FEvIR Converter version 0.4.0 (April 27, 2024) converts structured data from GRADEpro to FHIR JSON.
      Release 0.4.0 (April 27, 2024) includes an html <div> wrapper in the Narrative datatype in generated Composition Resources. Composition author is a Reference instead of a ContactDetail, so the author string needs to be in the “display” field and not “name.”
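The author note above follows from the FHIR datatypes involved: Composition.author is a list of References, and a Reference carries its human-readable text in display, whereas ContactDetail carries a name. A minimal sketch of the corrected mapping (the author string is hypothetical):

```python
import json

# A free-text author string taken from a converted source (hypothetical)
author_string = "GRADEpro Working Group"

# ContactDetail-style mapping: invalid for Composition.author
wrong = {"author": [{"name": author_string}]}

# Reference-style mapping: the string belongs in Reference.display
right = {"author": [{"display": author_string}]}

print(json.dumps(right))
```

The same fix applies wherever a converter previously emitted a ContactDetail-shaped author into a Reference element.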
                     
                    Computable Publishing®: MAGIC-to-FEvIR Converter version 0.19.0 (April 27, 2024) converts data from a MAGICapp JSON file (demo files available) to FEvIR Resources in FHIR JSON.
Release 0.19.0 (April 27, 2024) includes an html <div> wrapper in the Narrative datatype in generated Composition Resources and prevents a crash when certain elements are missing in the MAGICapp JSON. Composition author is a Reference instead of a ContactDetail, so the author string needs to be in the “display” field and not “name.”
                     
Computable Publishing®: Baseline Measure Report Authoring Tool creates a Composition Resource with a BaselineMeasureReport Profile. The current version is 0.3.0 (April 27, 2024).
                          Release 0.3.0 (April 27, 2024) is revised to include an html <div> wrapper in the Narrative datatype.
                     
The Computable Publishing®: Comparative Evidence Report Authoring Tool version 0.22.0 (April 27, 2024) creates and displays a Composition Resource with a ComparativeEvidenceReport Profile.
      Release 0.22.0 (April 27, 2024) is revised to include an html <div> wrapper in the Narrative datatype.
                     
Computable Publishing®: Guideline Authoring Tool version 0.7.0 (April 27, 2024) creates a Composition Resource with a Guideline Profile.
                          Release 0.7.0 (April 27, 2024) is revised to include an html <div> wrapper in the Narrative datatype.
Computable Publishing®: M11 Report Authoring Tool version 0.5.0 (April 27, 2024) creates and displays a Composition Resource with an M11Report Profile.
      Release 0.5.0 (April 27, 2024) is revised to match changes in the expected M11 Template and to include an html <div> wrapper in the Narrative datatype.
The Computable Publishing®: Recommendation Authoring Tool version 0.15.0 (April 27, 2024) creates a Composition Resource with a Recommendation Profile and the associated Resources for a structured representation of a recommendation.
      Release 0.15.0 (April 27, 2024) is revised to include an html <div> wrapper in the Narrative datatype.
                     
Computable Publishing®: Summary Of Findings Authoring Tool version 0.23.0 (April 27, 2024) creates and displays a SummaryOfFindings Profile of a Composition Resource.
                          Release 0.23.0 (April 27, 2024) is revised to include an html <div> wrapper in the Narrative datatype.
                     
                    Quotes for Thought:
                     
                    “Monday, the start of a new week, with brand-new opportunities to enjoy all that life has to offer.” – Audrey Carlan, author
“What we fear of doing most is usually what we most need to do.” — Ralph Waldo Emerson, philosopher
“Those who say it cannot be done should not interrupt those doing it.” — Chinese proverb
                    “Travel is never a matter of money but of courage.” – Paulo Coelho
                    “As I learned from growing up, you don’t mess with your grandmother.” — Prince William