Download Three Tools


Breanne Meisenheimer

Jan 10, 2024, 12:21:27 PM
to ebcesartaps

The quality of Internet information needs to be evaluated, and several tools exist for this purpose. However, none have demonstrated reliability and validity. This study tested the internal consistency and validity of the Information Quality Tool (IQT), Quality Scale (QS), and DISCERN using 89 websites discussing smoking cessation. The inter-rater reliability of the tools was established by exploring the agreement between two independent raters for 22 (25%) of the sites. The IQT and DISCERN possessed satisfactory internal consistency (as measured by Cronbach's alpha). The IQT, QS, and DISCERN showed satisfactory inter-rater reliability (as measured by kappa and intraclass correlations). The IQT, QS, and DISCERN correlated positively with each other, supporting the convergent validity of the tools. This study provides some evidence for the reliability and validity of the IQT, QS, and DISCERN, although this needs testing in further research with different types of Internet information and larger sample sizes.



Download https://t.co/LB2maYfp2C



Burnout in healthcare workers (HCWs) is costly, consequential, and alarmingly high. Many HCWs report not having enough time or opportunities to engage in self-care. Brief, engaging, evidence-based tools have unique potential to alleviate burnout and improve well-being. Three prospective cohort studies tested the efficacy of web-based interventions: Three Good Things (n = 275), Gratitude Letter (n = 123), and the Looking Forward Tool (n = 123). Metrics were emotional exhaustion, depression, subjective happiness, work-life balance, emotional thriving, and emotional recovery. Across all studies, participants reported improvements in all metrics between baseline and post assessments, with two exceptions: in study 1 (emotional thriving and happiness at the 6- and 12-month post-assessments) and in study 3 (optimism and emotional thriving at day 7). The Three Good Things, Gratitude Letter, and Looking Forward tools appear to be promising interventions for HCW burnout.

I am seeking advice and opinions that those badass enough to write their own ShaderMaterials might want to share regarding what dev environment one should adopt to create shaders specifically for three.js.

Wow! This is actually more than compelling! Ever since I first heard of web workers, I have aimed to use them for my three.js constructions, yet I never knew quite where to start or what their limits were.

We explored the performance of three machine learning tools designed to facilitate title and abstract screening in systematic reviews (SRs) when used to (a) eliminate irrelevant records (automated simulation) and (b) complement the work of a single reviewer (semi-automated simulation). We evaluated user experiences for each tool.

We subjected three SRs to two retrospective screening simulations. In each tool (Abstrackr, DistillerSR, RobotAnalyst), we screened a 200-record training set and downloaded the predicted relevance of the remaining records. We calculated the proportion missed and workload and time savings compared to dual independent screening. To test user experiences, eight research staff tried each tool and completed a survey.
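As a rough illustration of the metrics described above, here is a minimal Python sketch. The function names and the exact definitions (proportion missed relative to all truly relevant records; workload savings relative to the full record set) are assumptions based on common usage in the screening-automation literature, not the study's published formulas.

```python
# Hedged sketch of common screening-simulation metrics; the study's exact
# calculations (e.g., time savings) may differ.

def proportion_missed(relevant_ids, excluded_ids):
    """Share of truly relevant records the tool would have excluded."""
    relevant = set(relevant_ids)
    missed = relevant & set(excluded_ids)
    return len(missed) / len(relevant) if relevant else 0.0

def workload_savings(total_records, records_screened_by_humans):
    """Proportion of records humans no longer need to screen."""
    return 1 - records_screened_by_humans / total_records

# Example: 1,000 records, 40 relevant; after screening a 200-record training
# set, the tool excludes 600 records, 2 of which are actually relevant.
excluded = set(range(600))
relevant = set(range(598, 638))  # 2 relevant records fall in the excluded set

print(proportion_missed(relevant, excluded))  # 0.05
print(workload_savings(1000, 400))            # 0.6
```

The trade-off the study measures is visible here: excluding more records raises workload savings but risks a higher proportion missed.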

In light of known barriers to ML tool adoption [9,10,11,12], we investigated the relative advantages and risks of using ML tools to automate or semi-automate title and abstract screening. For three SRs, we compared how three ML tools performed when used in the context of (a) single reviewer screening to eliminate irrelevant records and (b) dual independent screening to complement the work of one of the human reviewers. We also aimed to compare user experiences across the tools.

Although many ML tools exist [13], we chose Abstrackr, DistillerSR, and RobotAnalyst because their development is well-documented [14,15,16], and at least for Abstrackr and RobotAnalyst, real-world performance has been evaluated [17,18,19]. We also chose the tools for practical reasons. All three allow the user to download the relevance predictions after screening a training set. Both Abstrackr and RobotAnalyst are freely available, and although DistillerSR is a pay-for-use software, our center maintains a user account.

In February 2019, we approached a convenience sample of 11 research staff at our center to participate in the user experience testing. These staff were experienced in producing SRs (e.g., research assistants, project coordinators, research associates), but had no or very little experience with ML tools for screening. We allowed invited participants 1 month to undertake the study, which entailed completing a screening exercise in each tool and a user experience survey. Participation was voluntary and completion of the survey implied consent. We received ethical approval for the user experience testing from the University of Alberta Research Ethics Board (Pro00087862).

For the screening exercise, we selected an SR with relatively straightforward eligibility criteria that was underway at our center (PROSPERO #CRD42017077622). We wanted participants to focus on their experience in each tool and did not want complex screening criteria to be a distraction. To reduce the risk of response bias, we used the random numbers generator in Excel to randomize the order in which each participant tested the three tools.

The survey (Additional file 2), hosted in REDCap (Research Electronic Data Capture) [24], asked participants to complete the System Usability Scale (SUS) [25] for each tool. The SUS is a 10-item questionnaire that assesses subjective usability using a Likert-like scale [25]. The survey also asked participants to elaborate on their experiences with each tool, rank the tools in order of preference, and describe the features that supported or detracted from their usability.

We exported the quantitative survey data from REDCap to Excel for analysis and the qualitative survey data to Word (v. 2016, Microsoft Corporation, Redmond, Washington). For each participant, we calculated the overall usability score for each tool as recommended by Brooke [25]. We calculated the median and interquartile range of scores for each tool and categorized their usability as recommended by Bangor et al. [27]: not acceptable (below 50), marginal (50 to 70), and acceptable (above 70). For the ranking of tools by preference, we calculated counts and percentages.
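Brooke's SUS scoring procedure mentioned above is well defined: each odd-numbered item contributes (response − 1), each even-numbered item contributes (5 − response), and the sum is multiplied by 2.5 to yield a 0–100 score. A minimal sketch follows; the `usability_band` helper applying the Bangor et al. cut-offs is an illustrative name of our own, not a function from the study.

```python
# Brooke's SUS scoring: odd items contribute (response - 1), even items
# contribute (5 - response); the sum is scaled by 2.5 to give 0-100.

def sus_score(responses):
    """responses: ten Likert ratings (1-5), item 1 first."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # 0-indexed: even index = odd item
        for i, r in enumerate(responses)
    )
    return total * 2.5

def usability_band(score):
    """Acceptability bands per Bangor et al., as applied in the text."""
    if score < 50:
        return "not acceptable"
    if score <= 70:
        return "marginal"
    return "acceptable"

print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
print(usability_band(75.0))                       # acceptable
```

Note that the alternation by item number matters: a response of 4 means "agree" with a positive statement on odd items but with a negative statement on even items, so raw responses cannot simply be summed.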

Another important contributor to the adoption of ML tools for screening will be their usability and fit with standard SR workflows [9]. The usability of the three tools varied considerably and depended on multiple properties. Although usability will be of little concern once title and abstract screening is fully automated, the path toward that ideal begins with the acceptance and greater adoption of semi-automated approaches. Several experienced reviewers in our sample were unable to download the predictions from some of the tools, and even when the predictions could be downloaded, they were often in an impractical or unusable format. So long as reviewers cannot leverage the tools as intended, adoption is unrealistic. Greater attention to usability may improve the appeal of ML-assisted screening during the early phases of adoption.

This is one of few studies to compare performance and user experiences across multiple ML tools for screening in SRs. Further, our study responds to a call from the International Collaboration for Automation of Systematic Reviews to trial and validate available tools [7] and addresses reported barriers to their adoption [9].

We thank Dr. Michelle Gates for piloting the usability survey and screening exercise and for suggesting revisions to the manuscript draft. We thank Dr. Meghan Sebastianski for reviewing the qualitative analysis. We also thank our colleagues for taking the time to test the ML tools and provide feedback and the peer reviewers for providing constructive suggestions for improvement on the manuscript draft.

We believe this AI-enabled scenario represents the likely future of QI work. Over the course of a 90-day Innovation project, an Institute for Healthcare Improvement (IHI) research team concluded that all the technology to perform these activities already exists. The generative AI products that have evolved since the public debut of ChatGPT in November 2022 might be a game-changer for health care quality teams, but organizations need to carefully consider the costs and benefits of their use. We arrived at three key concepts during the innovation cycle that we share below.

While our research suggested most practitioners are only in the early phases of using AI tools for QI work, these technologies will likely improve dramatically over time and change how we do QI in the coming years. We can already use AI tools to create most QI materials. Our research found that large language models (like ChatGPT) can help savvy users build run charts and control charts, identify change ideas, craft cause-and-effect diagrams, and draft driver diagrams.

AI tools can also help teach QI concepts. For example, they can offer explanations of complex ideas tailored to a specific audience. AI tools can draft lesson plans, course outlines, icebreakers, and much more. Some quality specialists have started to use AI tools to visualize data and tackle basic QI questions, such as generating a preliminary set of change ideas or producing a plain-language explanation of a QI concept.

Your imagination turns you from just a person into any character you could conceive of. Your imagination can create worlds. Your imagination takes your body and voice and combines them (or removes one of them!) to create different kinds of theatre. Take away the body and create a soundscape. Take away the voice and create a mime or tableau scene. Or use all three tools to create something entirely different and fascinating!

The sprint introduced a set of strategy tools designed to help the participants explore the problem framings proposed by the investors and identify actionable solutions. After testing these tools through the co-design sprint, D-Lab staff members Jona Repishti and Saida Benhayoune refined them into a practical toolkit for investors to accelerate their gender lens investing journey from intention to action:
