Delphi 2010 Handbook Pdf
Sanny Olafeso

Jul 25, 2024, 5:35:22 AM
to SKAT and MetaSKAT user group

Background: The literature reports no support material on shared decision-making (SDM) applied to breast cancer screening that is intended for Spanish health professionals. The research team created both a handbook and a guide on this topic using an adaptation of the Three-talk model.

Objective: A Delphi method will be used to reach agreement among experts on the contents and design of a handbook and a guide, developed by the research team, for use by health professionals applying SDM in breast cancer screening.

Design: A qualitative study. The content and design of the handbook and the guide were discussed by 20 experts. The Delphi technique was conducted online between July and October 2020; researchers used Google Forms in three rounds with open and closed questions. The criterion established for consensus was a coefficient of concordance (Cc) above 75 for questions using a Likert scale of 1-6 (in which 1 meant 'completely disagree' and 6 'completely agree'), with a cut-off point equal to or higher than 4.
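The consensus rule described above can be sketched in a few lines. This is an illustrative reading of the criterion, assuming the Cc is the percentage of panel ratings at or above the cut-off of 4; the function names and the sample panel are hypothetical, not taken from the study.

```python
def concordance(ratings, cutoff=4):
    """Percentage of Likert ratings (scale 1-6) at or above the cut-off."""
    return 100 * sum(r >= cutoff for r in ratings) / len(ratings)

def has_consensus(ratings, threshold=75):
    """Consensus is declared when the coefficient of concordance exceeds 75."""
    return concordance(ratings) > threshold

# Illustrative panel of 20 expert ratings for one questionnaire item
panel = [6, 5, 5, 4, 6, 5, 4, 3, 5, 6, 5, 4, 6, 5, 5, 4, 6, 5, 2, 6]
print(round(concordance(panel)))  # 90 -> consensus reached
```

Under this reading, the handbook result reported above (Cc=90) corresponds to 18 of 20 experts rating the item 4 or higher.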

Results: Participants considered the Three-talk model suitable for the screening context. The handbook sections and level of detail were considered satisfactory (Cc=90). The summary provided by the clinical practice guide was considered necessary (Cc=75), as was the self-assessment tool for professionals (Cc=85). Content was added: addressing the limitations of the SDM model; extending the number of sample dialogues for health professionals; providing supplementary resources on using patient decision aids; and adding references on communication skills.

Conclusions and applications: The first handbook and clinical practice guide providing dedicated SDM support material for health professionals have been developed. The handbook and guide are useful and innovative as supporting material for health professionals, but training strategies for SDM and a plan for piloting the materials are requested in order to facilitate their implementation.

The selection of appropriate outcomes is crucial when designing clinical trials in order to compare the effects of different interventions directly. For the findings to influence policy and practice, the outcomes need to be relevant and important to key stakeholders including patients and the public, health care professionals and others making decisions about health care. It is now widely acknowledged that insufficient attention has been paid to the choice of outcomes measured in clinical trials. Researchers are increasingly addressing this issue through the development and use of a core outcome set, an agreed standardised collection of outcomes which should be measured and reported, as a minimum, in all trials for a specific clinical area.

Accumulating work in this area has identified the need for guidance on the development, implementation, evaluation and updating of core outcome sets. This Handbook, developed by the COMET Initiative, brings together current thinking and methodological research regarding those issues. We recommend a four-step process to develop a core outcome set. The aim is to update the contents of the Handbook as further research is identified.

Clinical trials will usually include multiple outcomes of interest, and the main outcomes are usually those essential for decision-making. Some outcomes will be of more interest than others. The primary outcome is typically the one of greatest therapeutic importance [6] to relevant stakeholders, such as patients and clinicians; it is an integral component of the research question under investigation and is usually the one used in the sample size calculation [7]. Sometimes, researchers propose more than one primary outcome if these are thought to be of equal therapeutic importance and relevance to the research question. This can also be useful when it is unclear which single primary outcome will best answer the question. Secondary outcomes evaluate other beneficial or harmful effects of secondary importance, or are useful for explaining additional effects of the intervention [8]. Secondary outcomes may also be exploratory in nature. Harmful effects should always be viewed as important, regardless of their primary or secondary outcome label [7]. In addition to assessing relative therapeutic benefit and safety, decision-makers are usually also interested in the acceptability and cost-effectiveness of the interventions under study.
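To see why the primary outcome drives the sample size calculation, consider the standard normal-approximation formula for a continuous outcome compared between two groups: n per group = 2(z_alpha/2 + z_beta)^2 * sigma^2 / delta^2. The sketch below is generic textbook statistics, not a method from the Handbook, and the sigma and delta values are purely illustrative.

```python
import math

def n_per_group(sigma, delta, z_alpha=1.96, z_beta=0.8416):
    """Sample size per arm for detecting a difference `delta` in a
    continuous primary outcome with standard deviation `sigma`,
    two-sided alpha = 0.05 and 80% power (hence the default z values)."""
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

# Illustrative numbers: SD of 10 units, minimal important difference of 5
print(n_per_group(sigma=10.0, delta=5.0))  # 63 per arm
```

Changing the primary outcome changes sigma and delta, and therefore the required trial size, which is why its choice is so consequential.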

Clinical trials seek to evaluate whether an intervention is effective and safe by comparing the effects of interventions on outcomes, and by measuring differences in outcomes between groups. Clinical decisions about the care of individual patients are made on the basis of these outcomes, so clearly the selection of outcomes to be measured and reported in trials is critical. The chosen outcomes need to be relevant to health service users and others involved in making decisions and choices about health care. However, a lack of adequate attention to the choice of outcomes in clinical trials has led to avoidable waste in both the production and reporting of research, and the outcomes included in research have not always been those that patients regard as most important or relevant [13].

Alongside this inconsistency in the measurement of outcomes, outcome-reporting bias adds further to the problems faced by users of research who wish to make well-informed decisions about health care. Outcome-reporting bias has been defined as the selection of a subset of the original recorded outcomes, on the basis of the results, for inclusion in the published reports of trials and other research [18]. Empirical evidence shows that outcomes that are statistically significant are more likely to be fully reported [19]. Selective reporting of outcomes means that fully informed decisions cannot be made about the care of patients, resource allocation, research priorities and study design. This can lead to the use of ineffective or even harmful interventions, and to the waste of health care resources that are already limited [20].
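The mechanism of outcome-reporting bias described above can be illustrated with a small simulation, assuming the simplest selection rule: trials measure several outcomes with no true effect, but only results reaching nominal significance (|z| > 1.96) are reported. All trial counts and thresholds here are arbitrary; this is a sketch of the bias, not an analysis from the cited studies.

```python
import random
import statistics

random.seed(1)
reported, all_effects = [], []
for _ in range(2000):            # 2000 simulated trials
    for _ in range(5):           # each measuring 5 outcomes with true effect = 0
        z = random.gauss(0, 1)   # estimated standardised effect
        all_effects.append(z)
        if abs(z) > 1.96:        # only "significant" outcomes get published
            reported.append(abs(z))

print(round(statistics.mean(all_effects), 2))  # near zero: the full record is unbiased
print(round(statistics.mean(reported), 2))     # well above zero: the published record is biased
```

The published subset suggests substantial effects where none exist, which is exactly why selective reporting undermines evidence synthesis.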

The first step in the development of a COS is typically to identify what to measure [22]. Once agreement has been reached regarding what should be measured, how the outcomes included in the core set should be defined and measured is then determined.

The use of COS will lead to higher-quality trials and make it easier for the results of trials to be compared, contrasted and combined as appropriate, thereby reducing waste in research [22]. This approach would reduce heterogeneity between trials, because all trials would measure and report the agreed important outcomes; lead to research that is more likely to have measured relevant outcomes, due to the involvement of relevant stakeholders in determining what is core; and be of potential value for use in clinical audit. Importantly, it would enhance the value of evidence synthesis by reducing the risk of outcome-reporting bias and ensuring that all trials contribute usable information.

One of the earliest examples of an attempt to standardise outcomes is an initiative by the World Health Organisation in the 1970s relating to cancer trials [24]. More than 30 representatives from groups conducting cancer trials came together, resulting in a WHO Handbook of guidelines recommending the minimum requirements for data collection in cancer trials. The most notable work on outcome standardisation since then has been conducted by the OMERACT (Outcome Measures in Rheumatology) collaboration [25], which advocates the use of COS, designed using consensus techniques, in clinical trials in rheumatology. These and other relevant initiatives are described below.

Since OMERACT there have been other examples of similar COS initiatives to develop recommendations about the outcomes that should be measured in clinical trials. One example is the Initiative on Methods, Measurement, and Pain Assessment in Clinical Trials (IMMPACT) [32], whose aim is to develop consensus reviews and recommendations for improving the design, execution and interpretation of clinical trials of treatments for pain. The first IMMPACT meeting was held in November 2002, and there have been a total of 17 consensus meetings on clinical trials of treatments for acute and chronic pain in adults and children. Another exemplar is the Harmonising Outcome Measures for Eczema (HOME) Initiative [33]. This is an international group working to develop core outcomes to include in all eczema trials.

The COMET (Core Outcome Measures in Effectiveness Trials) Initiative brings together people interested in the development and application of COS [34]. COMET aims to collate and stimulate relevant resources, both applied and methodological, to facilitate exchange of ideas and information, and to foster methodological research in this area. As previously described [35], specific objectives are to:

The COMET Initiative was launched at a meeting in Liverpool in January 2010, funded by the MRC North West Hub for Trials Methodology Research (NWHTMR). More than 110 people attended, with representatives from trialists, systematic reviewers, health service users, clinical teams, journal editors, trial funders, policy-makers, trial registries and regulators. The feedback was uniformly supportive, indicating a strong consensus that the time was right for such an initiative. A second meeting in Bristol in July 2011 reinforced the need for COS across a wide range of areas of health and the role of COMET in helping to coordinate information about them. Subsequent successful international meetings in Manchester (2013), Rome (2014) [36] and Calgary (2015) [37] have affirmed this.
