Background: Estimates of association between exposures and diseases are often distorted by error in exposure classification. When the validity of exposure assessment is known, it can be used to adjust these estimates. When exposure is assessed by experts, even if validity is not known, we sometimes have information about interrater reliability. We present a Bayesian method for translating knowledge of interrater reliability, which is often available, into knowledge about validity, which is often needed but not directly available, and for applying it to correct odds ratios (OR).
Methods: The method allows for the inclusion of observed potential confounders in the analysis, as is common in regression-based control for confounding. Our method uses a novel type of prior on sensitivity and specificity. The approach is illustrated with data from a case-control study of lung cancer risk and occupational exposure to diesel engine emissions, in which exposure assessment was made by detailed job-history interviews with study subjects followed by expert judgement.
Results: Using interrater agreement measured by kappas (κ), we estimated the sensitivity and specificity of exposure assessment and derived misclassification-corrected, confounder-adjusted ORs. The misclassification-corrected and confounder-adjusted OR obtained with the most defensible prior had a posterior distribution centred at 1.6 with a 95% credible interval (CrI) of 1.1 to 2.6. This was on average greater in magnitude than the frequentist point estimate of 1.3 (95% CI 1.0 to 1.7).
Conclusions: The method yields insights into the degree of exposure misclassification and appears to reduce the attenuation bias due to misclassification of exposure, while increasing the estimated uncertainty.
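As rough intuition for how knowledge of sensitivity and specificity can be used to undo exposure misclassification in an odds ratio, the sketch below runs a simple Monte Carlo (probabilistic bias analysis) correction in Python. It is not the authors' Bayesian model: it ignores confounders and the kappa-to-validity translation, and the 2x2 counts and beta priors are purely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical observed (misclassified) 2x2 counts; not data from the study.
a, b = 120, 380   # cases: classified exposed, classified unexposed
c, d = 80, 420    # controls: classified exposed, classified unexposed

n_draws = 10_000
corrected_or = []

for _ in range(n_draws):
    # Illustrative beta priors on sensitivity and specificity of exposure
    # classification (not the paper's informative priors).
    se = rng.beta(40, 10)   # sensitivity
    sp = rng.beta(45, 5)    # specificity

    # Back-calculate "true" exposed counts from observed counts, using
    #   observed_exposed = Se * true_exposed + (1 - Sp) * true_unexposed,
    # solved for true_exposed within each case/control group.
    a_t = (a - (1 - sp) * (a + b)) / (se + sp - 1)
    c_t = (c - (1 - sp) * (c + d)) / (se + sp - 1)

    # Discard draws that imply impossible (negative or too-large) counts.
    if not (0 < a_t < a + b and 0 < c_t < c + d):
        continue

    b_t = (a + b) - a_t
    d_t = (c + d) - c_t
    corrected_or.append((a_t / b_t) / (c_t / d_t))

corrected_or = np.array(corrected_or)
print("median corrected OR:", np.median(corrected_or))
print("95% simulation interval:", np.percentile(corrected_or, [2.5, 97.5]))
```

Because non-differential misclassification typically biases the OR towards the null, the corrected OR from such a procedure is usually farther from 1 than the uncorrected estimate, with wider uncertainty, which mirrors the pattern reported in the abstract.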
http://ift.tt/2mEDiQf