• Evidence-based decision-making in infectious diseases epidemiology, prevention and control: matching research questions to study designs and quality appraisal tools.

      Harder, Thomas; Takla, Anja; Rehfuess, Eva; Sánchez-Vivar, Alex; Matysiak-Klose, Dorothea; Eckmanns, Tim; Krause, Gerard; de Carvalho Gomes, Helena; Jansen, Andreas; Ellis, Simon; et al. (2014)
      The Project on a Framework for Rating Evidence in Public Health (PRECEPT) was initiated and is being funded by the European Centre for Disease Prevention and Control (ECDC) to define a methodology for evaluating and grading evidence and strength of recommendations in the field of public health, with emphasis on infectious disease epidemiology, prevention and control. One of the first steps was to review existing quality appraisal tools (QATs) for individual research studies of various designs relevant to this area, using a question-based approach.
    • Factors associated with attrition in a longitudinal online study: results from the HaBIDS panel.

      Rübsamen, Nicole; Akmatov, Manas K; Castell, Stefanie; Karch, André; Mikolajczyk, Rafael T; Helmholtz-Zentrum für Infektionsforschung GmbH, Inhoffenstr. 7, 38124 Braunschweig, Germany. (2017-08-31)
      Knowing the predictors of attrition in a panel is important for initiating early measures against the loss of participants. We investigated attrition in both the early and the late phase of an online panel, with a special focus on preferences regarding the mode of participation.
    • Measuring inter-rater reliability for nominal data - which coefficients and confidence intervals are appropriate?

      Zapf, Antonia; Castell, Stefanie; Morawietz, Lars; Karch, André; Helmholtz Centre for Infection Research, Inhoffenstr. 7, 38124 Braunschweig, Germany. (2016)
      Reliability of measurements is a prerequisite of medical research. For nominal data, Fleiss' kappa (hereafter labelled Fleiss' K) and Krippendorff's alpha offer the greatest flexibility among the available reliability measures with respect to the number of raters and categories. Our aim was to investigate which measures and which confidence intervals provide the best statistical properties for assessing inter-rater reliability in different situations.
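
      As a rough illustration of the agreement statistic discussed in this abstract, the following Python sketch computes Fleiss' kappa from a subjects-by-categories count matrix, together with a simple subject-level percentile bootstrap confidence interval. The function names, the example data and the choice of a percentile bootstrap are our own illustrative assumptions, not details taken from the study itself.

        import numpy as np

        def fleiss_kappa(counts):
            # counts[i, j] = number of raters assigning subject i to category j;
            # every row must sum to the same number of raters.
            counts = np.asarray(counts, dtype=float)
            n_subjects = counts.shape[0]
            n_raters = counts[0].sum()
            # Observed agreement per subject, averaged over subjects.
            p_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
            p_bar = p_i.mean()
            # Chance agreement from the marginal category proportions.
            p_j = counts.sum(axis=0) / (n_subjects * n_raters)
            p_e = np.square(p_j).sum()
            if p_e == 1.0:
                # Degenerate case: all ratings fall in a single category.
                return 1.0
            return (p_bar - p_e) / (1.0 - p_e)

        def bootstrap_ci(counts, n_boot=2000, level=0.95, seed=1):
            # Percentile bootstrap over subjects; one of several interval
            # constructions a comparison like the one above might evaluate.
            rng = np.random.default_rng(seed)
            counts = np.asarray(counts, dtype=float)
            n = counts.shape[0]
            stats = [fleiss_kappa(counts[rng.integers(0, n, size=n)])
                     for _ in range(n_boot)]
            return tuple(np.percentile(stats, [50 * (1 - level), 50 * (1 + level)]))

        # Example: 4 subjects, 3 raters, 2 categories.
        ratings = np.array([[3, 0], [2, 1], [0, 3], [1, 2]])
        print(fleiss_kappa(ratings))   # ~0.333
        print(bootstrap_ci(ratings))

      Ready-made implementations also exist, e.g. fleiss_kappa in statsmodels.stats.inter_rater; the sketch above is only meant to make the quantity being compared concrete.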