CASP Randomised Controlled Trial Standard Checklist:


Read research RCT article and critique research article. Make sure to answer all the consider questions on the RCT FORM! Do not just check off -yes or no or not sure. Give your reasoning for your answer.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6447149/

https://casp-uk.b-cdn.net/wp-content/uploads/2020/10/CASP_RCT_Checklist_PDF.pdf


11 questions to help you make sense of a randomised controlled trial (RCT)

Main issues for consideration: Several aspects need to be considered when appraising a randomised controlled trial:

 

Is the basic study design valid for a randomised controlled trial? (Section A)

Was the study methodologically sound? (Section B)

What are the results? (Section C)

Will the results help locally? (Section D)

 

The 11 questions in the checklist are designed to help you think about these aspects systematically.

 

How to use this appraisal tool: The first three questions (Section A) are screening questions about the validity of the basic study design and can be answered quickly. If, in light of your responses to Section A, you think the study design is valid, continue to Section B to assess whether the study was methodologically sound and if it is worth continuing with the appraisal by answering the remaining questions in Sections C and D.

 

Record ‘Yes’, ‘No’ or ‘Can’t tell’ in response to the questions. Prompts below all but one of the questions highlight the issues it is important to consider. Record the reasons for your answers in the space provided. As CASP checklists were designed to be used as educational/teaching tools in a workshop setting, we do not recommend using a scoring system.

 

About CASP Checklists: The CASP RCT checklist was originally based on JAMA Users’ guides to the medical literature 1994 (adapted from Guyatt GH, Sackett DL and Cook DJ), and piloted with healthcare practitioners. This version has been updated taking into account the CONSORT 2010 guideline (http://www.consort-statement.org/consort-2010, accessed 16 September 2020).

 

Citation: CASP recommends using the Harvard style, i.e., Critical Appraisal Skills Programme (2021). CASP (insert name of checklist i.e. Randomised Controlled Trial) Checklist. [online] Available at: insert URL. Accessed: insert date accessed.

 

©CASP this work is licensed under the Creative Commons Attribution – Non-Commercial – ShareAlike licence. To view a copy of this licence, visit https://creativecommons.org/licenses/by-sa/4.0/

 

Critical Appraisal Skills Programme (CASP) www.casp-uk.net Part of OAP Ltd

 

 

Study and citation: Padilha et al. (2019).

 

1. Did the study address a clearly focused research question?

CONSIDER:  

•         Was the study designed to assess the outcomes of an intervention?

•         Is the research question ‘focused’ in terms of:

•    Population studied  

•    Intervention given

•    Comparator chosen

•    Outcomes measured?

Yes                        No                         Can’t tell

 

 

The study addressed a clearly focused research question: the researchers designed the study to assess the effects of implementing clinical virtual simulation on the levels of learning satisfaction, knowledge retention, self-efficacy, and clinical reasoning among nursing students. The research question is also focused with respect to the population studied, the intervention given, the comparator chosen, and the outcomes measured.

 

2. Was the assignment of participants to interventions randomised?

CONSIDER:  

•         How was randomisation carried out? Was the method appropriate?

•         Was randomisation sufficient to eliminate systematic bias?

•         Was the allocation sequence concealed from investigators and participants?

Yes                        No                         Can’t tell

 

 

 

The assignment of participants to the interventions was randomised through simple randomization. The randomization was sufficient to eliminate systematic bias; in randomized controlled trials, randomization helps to eliminate bias and improve the rigor of the study (Hariton & Locascio, 2018). The allocation sequence was concealed from investigators and participants through anonymization of the students.

 

 

3. Were all participants who entered the study accounted for at its conclusion?

CONSIDER:  

•         Were losses to follow-up and exclusions after randomisation accounted for?

•         Were participants analysed in the study groups to which they were randomised (intention-to-treat analysis)?

•         Was the study stopped early? If so, what was the reason?

Yes                        No                         Can’t tell

 

 

 

All participants who entered the study were accounted for at its conclusion. The researchers accounted for losses to follow-up and exclusions after randomization, and they analyzed the participants in the study groups to which they were randomized.

 

4. Were the participants ‘blind’ to the intervention they were given?

Yes                        No                         Can’t tell

 

Were the investigators ‘blind’ to the intervention they were giving to participants?

Yes                        No                         Can’t tell

 

Were the people assessing/analysing outcome/s ‘blinded’?

Yes                        No                         Can’t tell

 

 

 

 

5. Were the study groups similar at the start of the randomised controlled trial?

CONSIDER:  

•         Were the baseline characteristics of each study group (e.g. age, sex, socio-economic group) clearly set out?

•         Were there any differences between the study groups that could affect the outcome/s?

Yes                        No                         Can’t tell

 

 

The intervention and control groups were similar at the start of the randomized controlled trial, as the baseline characteristics of each study group were clearly set out.

 

6. Apart from the experimental intervention, did each study group receive the same level of care (that is, were they treated equally)?

 

CONSIDER:  

•         Was there a clearly defined study protocol?

•         If any additional interventions were given (e.g. tests or treatments), were they similar between the study groups?

•         Were the follow-up intervals the same for each study group?

Yes                        No                         Can’t tell

 

 

Apart from the experimental intervention, each study group received the same level of care. In particular, the follow-up intervals were the same for each study group.

 

 

 

 

7. Were the effects of intervention reported comprehensively?

 

CONSIDER:  

•        Was a power calculation undertaken?

•        What outcomes were measured, and were they clearly specified?

•         How were the results expressed? For binary outcomes, were relative and absolute effects reported?

•         Were the results reported for each outcome in each study group at each follow-up interval?

•        Was there any missing or incomplete data?

•        Was there differential drop-out between the study groups that could affect the results?

•         Were potential sources of bias identified?

•         Which statistical tests were used?

•         Were p values reported?

Yes                        No                         Can’t tell

 

 

 

The researchers comprehensively reported the effects of the intervention. A power calculation was undertaken at a statistical power of 0.80. The specific outcomes measured in the study include the levels of learning satisfaction, knowledge retention, self-efficacy, and clinical reasoning among nursing students. The results were clearly expressed and reported for each study group after the implementation of the intervention and during follow-up. The data were complete, and the researchers did not report any differential drop-out between the study groups that could affect the results. Potential sources of bias were not reported. A number of statistical tests were performed, including the Kolmogorov-Smirnov test with the Lilliefors correction, the unpaired t-test with the Welch correction, and multivariate analysis of variance (MANOVA). Statistical significance was tested at P < .05.
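The Welch-corrected unpaired t-test mentioned above compares group means without assuming equal variances. A minimal sketch of how such a comparison is run, using invented satisfaction scores (not the authors' data) and SciPy:

```python
from scipy import stats

# Hypothetical post-test satisfaction scores (0-10 scale); NOT values from the study.
control = [6.1, 5.8, 6.4, 5.9, 6.0, 6.3, 5.7, 6.2]
simulation = [7.0, 7.4, 6.8, 7.2, 7.5, 6.9, 7.1, 7.3]

# equal_var=False applies the Welch correction for unequal group variances.
t_stat, p_value = stats.ttest_ind(simulation, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

With a Welch-corrected test, an imbalance in group variances does not inflate the false-positive rate, which is why it is often preferred over the classic pooled-variance t-test.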

8. Was the precision of the estimate of the intervention or treatment effect reported?

 

CONSIDER:  

•         Were confidence intervals (CIs) reported?

Yes                        No                         Can’t tell

 

 

The precision of the estimate of the intervention or treatment effect was not reported. The researchers did not report confidence intervals (CIs).
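Since confidence intervals were not reported, a sketch of what such a report would have looked like may be helpful: a two-sided 95% CI for a mean difference, computed from hypothetical summary statistics (all numbers invented, not taken from the paper):

```python
import math
from scipy import stats

# Hypothetical group summaries (mean, SD, n); NOT values from the paper.
m1, s1, n1 = 7.1, 0.9, 21   # virtual simulation group
m2, s2, n2 = 6.3, 1.0, 21   # control group

diff = m1 - m2
se = math.sqrt(s1**2 / n1 + s2**2 / n2)   # standard error of the difference
df = n1 + n2 - 2                          # simple pooled-df approximation
t_crit = stats.t.ppf(0.975, df)           # two-sided 95% critical value
ci = (diff - t_crit * se, diff + t_crit * se)
print(f"Mean difference = {diff:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

An interval like this conveys the precision of the treatment effect: the narrower the CI, the more precise the estimate, and a CI excluding zero indicates a statistically significant difference.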

9. Do the benefits of the experimental intervention outweigh the harms and costs?

 

CONSIDER:  

•         What was the size of the intervention or treatment effect?

•         Were harms or unintended effects reported for each study group?

•         Was a cost-effectiveness analysis undertaken? (Cost-effectiveness analysis allows a comparison to be made between different interventions used in the care of the same condition or problem.)

Yes                        No                         Can’t tell

 

 

The benefits of the experimental intervention outweigh the harms. However, it is not clear whether they outweigh the costs. The researchers used an effect size of d = 0.80, which signifies a large effect of the intervention on the studied sample (Bujang, 2021).
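The link between the assumed effect size (d = 0.80) and the statistical power of 0.80 noted under question 7 can be illustrated with a standard sample-size calculation. This sketch assumes the statsmodels library is available and uses the conventional two-sided alpha of .05; these defaults are an assumption, not extra detail from the paper:

```python
from statsmodels.stats.power import TTestIndPower

# Solve for n per group given a large effect (d = 0.80), alpha = .05, power = 0.80.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.80, alpha=0.05, power=0.80)
print(f"Required sample size per group: {n_per_group:.1f}")  # roughly 26 per group
```

This is why a large assumed effect size keeps the required sample small: detecting a smaller effect (say d = 0.50) at the same power would require roughly 64 participants per group instead.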

 

 

 

 

 

10. Can the results be applied to your local population/in your context?

 

CONSIDER:

•         Are the study participants similar to the people in your care?  

•         Would any differences between your population and the study participants alter the outcomes reported in the study?

•         Are the outcomes important to your population?  

•         Are there any outcomes you would have wanted information on that have not been studied or reported?  

•         Are there any limitations of the study that would affect your decision?

Yes                        No                         Can’t tell

 

The study results can be applied in my context. The reason is that the study participants are nursing students, and they resemble the nurses who are involved in patient care in my practice setting. Any differences between my population and the study participants might alter the outcomes reported in the study. The outcomes are clear and are important to my population. The reported limitations would not affect my decision regarding the study outcomes.
11. Would the experimental intervention provide greater value to the people in your care than any of the existing interventions?

 

CONSIDER:  

•         What resources are needed to introduce this intervention taking into account time, finances, and skills development or training needs?

•         Are you able to disinvest resources in one or more existing interventions in order to be able to re-invest in the new intervention?

Yes                        No                         Can’t tell

 

The experimental intervention would indeed provide greater value to the people in my care. The organization would need to set aside finances, time, and trainers in order to introduce the intervention. However, this does not call for disinvesting resources from other existing interventions.
 
APPRAISAL SUMMARY: Record key points from your critical appraisal in this box. What is your conclusion about the paper? Would you use it to change your practice or to recommend changes to care/interventions used by your organisation? Could you judiciously implement this intervention without delay?

 

The key point from the critical appraisal is that RCTs have unique features, particularly randomization and the use of control groups, which distinguish them from other types of studies. The conclusion that can be drawn from the paper is that clinical virtual simulation is effective in improving knowledge retention and initial clinical reasoning over time, and it also improves student satisfaction with learning. However, clinical virtual simulation does not influence general self-efficacy perception. Based on these findings, I would use clinical virtual simulation to change practice or recommend changes to care used by my organization. I can judiciously implement the intervention without delay due to its proven effectiveness in improving knowledge retention, enhancing initial clinical reasoning over time, and improving student satisfaction with learning.

 

References

Bujang, M. A. (2021). A step-by-step process on sample size determination for medical research. The Malaysian Journal of Medical Sciences: MJMS, 28(2), 15–27. https://doi.org/10.21315/mjms2021.28.2.2

Hariton, E., & Locascio, J. J. (2018). Randomised controlled trials – the gold standard for effectiveness research: Study design: randomised controlled trials. BJOG: An International Journal of Obstetrics and Gynaecology, 125(13), 1716. https://doi.org/10.1111/1471-0528.15199

Padilha, J. M., Machado, P. P., Ribeiro, A., Ramos, J., & Costa, P. (2019). Clinical virtual simulation in nursing education: Randomized controlled trial. Journal of Medical Internet Research, 21(3), e11529. https://doi.org/10.2196/11529