Rapid critical appraisal of an RCT Step 2

Step 2: How well was the study done?

In this step we will be using the mnemonic “RAMMbo” to work through a method of rapid critical appraisal for intervention studies using the article by Spink, M.J. et al. that we referred to earlier.

R = Recruitment
A = Allocation
M = Maintenance
M = Measurement (blinding, objective measures)


Recruitment

Questions to ask yourself Were the subjects representative of the target population?
Where do I find this information? Early in the METHODS section of the paper.
Limitations regarding recruitment are in the DISCUSSION (Strengths and limitations) section.
Answer The need for consent means that the subjects were probably not representative. The sample may have been biased towards those prepared to commit to participation for a 12-month period, but the characteristics of those who participated were well described in Table 1.
Questions to ask yourself Was the sample adequately powered?
Where do I find this information? The METHODS section (Trial design)
Answer 305 participants were recruited, 153 in the intervention group and 152 in the control group. Sample size calculations to achieve 80% power required 143 per group, so the study appears to be adequately powered. However, the calculation assumed a 60% rate of falls in the control group, based on a previous study. The authors note that their trial may have been underpowered because fewer people than expected fell in the control group of their study (only 49% rather than the anticipated 60%).
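To see how such a calculation works, the standard two-proportion sample-size formula can be sketched in a few lines of Python. This is only a rough illustration: the 60% control fall rate, the 80% power target and the 143-per-group figure are taken from the answer above, while the intervention-group fall rates used below are hypothetical values chosen purely to demonstrate the method.

```python
# Minimal sketch: approximate sample size per group for comparing two
# proportions, assuming a two-sided alpha of 0.05 and 80% power.
# The intervention-group rates passed in below are hypothetical.
from scipy.stats import norm

def n_per_group(p_control, p_intervention, alpha=0.05, power=0.80):
    """Approximate n per group for a two-proportion comparison."""
    z_a = norm.ppf(1 - alpha / 2)      # critical value for two-sided alpha
    z_b = norm.ppf(power)              # critical value for the desired power
    p_bar = (p_control + p_intervention) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p_control * (1 - p_control)
                    + p_intervention * (1 - p_intervention)) ** 0.5) ** 2
    return num / (p_control - p_intervention) ** 2

# A hypothetical intervention rate of ~0.435 reproduces roughly 143 per group
print(n_per_group(0.60, 0.435))   # ~143
# With only 49% of controls falling and the same relative reduction,
# many more participants would be needed
print(n_per_group(0.49, 0.355))   # ~209
```

The second call shows why a lower-than-expected fall rate in the control group can leave a trial underpowered: the same relative reduction produces a smaller absolute difference between groups, which needs more participants to detect.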

Allocation

Questions to ask yourself Was the randomisation method unbiased?
Where do I find this information? The METHODS section (randomisation)
Answer Yes: Participants were allocated to groups using an interactive voice response telephone service provided by the National Health and Medical Research Council (NHMRC) Clinical Trials Centre.
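For readers unfamiliar with computer-generated allocation, a minimal sketch of permuted-block randomisation is shown below. This is not the NHMRC telephone service used in the trial; it simply illustrates what an unbiased, pre-generated allocation sequence looks like.

```python
# Illustrative permuted-block randomisation (not the trial's actual system).
import random

def block_randomise(n_participants, block_size=4, seed=42):
    random.seed(seed)                  # fixed seed only so the example is reproducible
    allocations = []
    while len(allocations) < n_participants:
        # Each block contains equal numbers of each arm, shuffled at random,
        # which keeps group sizes balanced as recruitment proceeds.
        block = ["intervention"] * (block_size // 2) + ["control"] * (block_size // 2)
        random.shuffle(block)
        allocations.extend(block)
    return allocations[:n_participants]

print(block_randomise(10))
```

Because the sequence is produced by an external, computer-driven process (in the trial, a central telephone service), neither the recruiting clinicians nor the participants can predict or influence the next allocation.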
Questions to ask yourself Were the groups matched as closely as possible at the start of the trial?
Where do I find this information? Table 1 in the RESULTS section, p.4.
Answer Table 1 in the paper shows that the groups were well matched, apart from a difference in the rate of arthritis (osteoarthritis and rheumatoid arthritis) between the two groups.

Maintenance

Questions to ask yourself Was the comparable status of the study groups maintained through equal treatment, and was there adequate follow-up? Everything should stay the same for both groups, except for the introduction of the intervention.
Where do I find this information? The METHODS section (Trial design)
Answer Yes: The management was the same for both groups. The intervention was administered by the same podiatrist to all participants in the intervention group, thus increasing the likelihood of equal treatment. Each participant was tested by the same assessor at 6 months and 12 months, thus reducing the likelihood of measurement bias.
Questions to ask yourself Is information provided to show how and where subjects were 'lost' during the trial?
Where do I find this information? The METHODS section
Answer Yes: An explanatory flowchart shows that six participants were lost to follow-up from the intervention group and three from the control group.
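Losses this small are easy to convert into follow-up rates. The group sizes (153 and 152) and the losses (six and three) are the figures quoted above; the arithmetic below is just a quick check.

```python
# Quick check of loss-to-follow-up and retention rates using the figures above.
for group, n, lost in [("intervention", 153, 6), ("control", 152, 3)]:
    print(f"{group}: {lost / n:.1%} lost to follow-up, {(n - lost) / n:.1%} retained")
```

Both groups retain well over 95% of participants, which supports the judgement that follow-up was adequate.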

Measurements - Blinding

Questions to ask yourself Were the outcomes measured with blinded subjects and assessors, and/or with objective outcomes? This is necessary to reduce bias as far as possible.
Where do I find this information? The METHODS section (Trial design)
Further discussed in the DISCUSSION section (Strengths and limitations)
Answer The subjects were not blinded to the treatment because consent was required and they had to commit to participating in the intervention package. The assessors of the secondary outcomes were blinded to the treatment group to which each subject had been allocated.

Measurements - Objective outcomes

Questions to ask yourself Were the outcomes measured objectively?
Where do I find this information? The METHODS section (Trial design)
Answer Yes: The three primary outcomes - mean number of falls per participant, number experiencing one or more falls, and number experiencing two or more falls - were measured objectively by means of falls calendars.
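As a final illustration, the three primary outcomes can be derived from falls-calendar totals with very simple summaries. The sketch below uses invented counts for a handful of hypothetical participants; none of the numbers come from the trial.

```python
# Minimal sketch: summarising hypothetical 12-month falls-calendar totals
# into the three primary outcomes named above. Data are invented.
falls_per_participant = [0, 0, 2, 1, 0, 3, 0, 1]

mean_falls = sum(falls_per_participant) / len(falls_per_participant)
one_or_more = sum(1 for f in falls_per_participant if f >= 1)
two_or_more = sum(1 for f in falls_per_participant if f >= 2)

print(f"mean falls per participant: {mean_falls:.2f}")
print(f"participants with >= 1 fall: {one_or_more}")
print(f"participants with >= 2 falls: {two_or_more}")
```

Because each outcome is a simple count taken from the calendars rather than a subjective rating, the measurement is objective even though the participants themselves were not blinded.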