TY - JOUR
T1 - Method and reporting quality in health professions education research
T2 - A systematic review
AU - Cook, David A.
AU - Levinson, Anthony J.
AU - Garside, Sarah
PY - 2011/3
Y1 - 2011/3
N2 - Context Studies evaluating reporting quality in health professions education (HPE) research have demonstrated deficiencies, but none have used comprehensive reporting standards. Additionally, the relationship between study methods and effect size (ES) in HPE research is unknown. Objectives This review aimed to evaluate, in a sample of experimental studies of Internet-based instruction, the quality of reporting, the relationship between reporting and methodological quality, and associations between ES and study methods. Methods We conducted a systematic search of databases including MEDLINE, Scopus, CINAHL, EMBASE and ERIC, for articles published during 1990-2008. Studies (in any language) quantifying the effect of Internet-based instruction in HPE compared with no intervention or other instruction were included. Working independently and in duplicate, we coded reporting quality using the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement, and coded study methods using a modified Newcastle-Ottawa Scale (m-NOS), the Medical Education Research Study Quality Instrument (MERSQI), and the Best Evidence in Medical Education (BEME) global scale. Results For reporting quality, articles scored a mean±standard deviation (SD) of 51±25% of STROBE elements for the Introduction, 58±20% for the Methods, 50±18% for the Results and 41±26% for the Discussion sections. We found positive associations (all p<0.0001) between reporting quality and MERSQI (ρ=0.64), m-NOS (ρ=0.57) and BEME (ρ=0.58) scores. We explored associations between study methods and knowledge ES by subtracting each study's ES from the pooled ES for studies using that method and comparing these differences between subgroups. Effect sizes in single-group pretest/post-test studies differed from the pooled estimate more than ESs in two-group studies (p=0.013). No difference was found between other study methods (yes/no: representative sample, comparison group from same community, randomised, allocation concealed, participants blinded, assessor blinded, objective assessment, high follow-up). Conclusions Information is missing from all sections of reports of HPE experiments. Single-group pre-/post-test studies may overestimate ES compared with two-group designs. Other methodological variations did not bias study results in this sample.
UR - http://www.scopus.com/inward/record.url?scp=79551662100&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=79551662100&partnerID=8YFLogxK
U2 - 10.1111/j.1365-2923.2010.03890.x
DO - 10.1111/j.1365-2923.2010.03890.x
M3 - Article
C2 - 21299598
AN - SCOPUS:79551662100
SN - 0308-0110
VL - 45
SP - 227
EP - 238
JO - Medical Education
JF - Medical Education
IS - 3
ER -