Diagnostic Research : improvements in design and analysis

Created by W.Langdon from gp-bibliography.bib Revision:1.4340

  author =       "Cornelis Jan Biesheuvel",
  title =        "Diagnostic Research : improvements in design and
                  analysis",
  school =       "Universiteit Utrecht",
  year =         "2005",
  address =      "Holland",
  ISBN =         "90-393-2706-8",
  keywords =     "genetic algorithms, genetic programming, diagnosis,
                 methodology, prediction research",
  URL =          "http://igitur-archive.library.uu.nl/dissertations/2005-0511-200047/",
  URL =          "http://igitur-archive.library.uu.nl/dissertations/2005-0511-200047/full.pdf",
  URL =          "http://igitur-archive.library.uu.nl/dissertations/2005-0511-200047/title.pdf",
  URL =          "http://igitur-archive.library.uu.nl/dissertations/2005-0511-200047/contents.pdf",
  URL =          "http://igitur-archive.library.uu.nl/dissertations/2005-0511-200047/c1.pdf",
  URL =          "http://igitur-archive.library.uu.nl/dissertations/2005-0511-200047/c2.pdf",
  URL =          "http://igitur-archive.library.uu.nl/dissertations/2005-0511-200047/c3.pdf",
  URL =          "http://igitur-archive.library.uu.nl/dissertations/2005-0511-200047/c4.pdf",
  URL =          "http://igitur-archive.library.uu.nl/dissertations/2005-0511-200047/c5.pdf",
  URL =          "http://igitur-archive.library.uu.nl/dissertations/2005-0511-200047/c6.pdf",
  URL =          "http://igitur-archive.library.uu.nl/dissertations/2005-0511-200047/c7.pdf",
  URL =          "http://igitur-archive.library.uu.nl/dissertations/2005-0511-200047/c8.pdf",
  URL =          "http://igitur-archive.library.uu.nl/dissertations/2005-0511-200047/sum.pdf",
  URL =          "http://igitur-archive.library.uu.nl/dissertations/2005-0511-200047/sam.pdf",
  URL =          "http://igitur-archive.library.uu.nl/dissertations/2005-0511-200047/dank.pdf",
  URL =          "http://igitur-archive.library.uu.nl/dissertations/2005-0511-200047/cv.pdf",
  size =         "103 pages",
  abstract =     "In the era of evidence-based medicine, diagnostic
                 procedures also need to undergo critical evaluations.
                 In contrast to guidelines for randomised trials and
                 observational etiologic studies, principles and methods
                 for diagnostic evaluations are still incomplete. The
                 research described in this thesis was conducted to
                 further improve the methods for design and analysis of
                 diagnostic studies.

                 In the past, most diagnostic accuracy studies followed
                  a univariable or single-test approach with the aim of
                  quantifying the sensitivity, specificity or likelihood
                  ratio. However, single-test studies and measures do not
                 reflect a test's added value. It is not the singular
                 association between a particular test result or
                 predictor and the diagnostic outcome that is
                  informative, but the test's value independent of other
                  diagnostic information. Multivariable modelling is
                 necessary to estimate the value of a particular test
                 conditional on other test results. However, diagnostic
                 prediction rules are not the solution to everything.
                 They have certain drawbacks, such as overoptimistic
                 accuracy when applied to new patients. Recently,
                 methods have been described to overcome some of these
                  drawbacks. Typically, in diagnostic research one
                  selects a cohort of patients with an indication for the
                  diagnostic procedure of interest, as defined by the
                  suspicion that they have the disease of interest.
                 The data are analysed cross-sectionally. When
                 appropriate analyses are applied, results from nested
                 case-control studies should be virtually identical to
                 results based on a full cohort analysis. We showed that
                 the nested case-control design offers investigators a
                  valid and efficient alternative to a full cohort
                  approach in diagnostic research. This may be
                 particularly important when the results of the test
                 under study are costly or difficult to collect.

                 It is suggested that randomised controlled trials
                 deliver the highest level of evidence to answer
                 research questions. The paradigm of a randomised study
                 design has also been applied to diagnostic research. We
                  showed that a randomised study design is not always
                 necessary to evaluate the value of a diagnostic test to
                 change patient outcome. A test's effect on patient
                  outcome can be inferred, and indeed quantified using
                  decision analysis, 1) if the test is
                 meant to include or exclude a disease for which an
                 established reference is available, 2) if a
                 cross-sectional accuracy study has shown the test's
                 ability to adequately detect the presence or absence of
                 that disease based on the reference, and finally 3) if
                 proper, randomised therapeutic studies have provided
                 evidence on efficacy of the optimal management of this
                 disease. In such instances diagnostic research does not
                 require an additional randomised comparison between two
                 (or more) 'test-treatment strategies' (one with and one
                 without the test under study) to establish the test's
                 effect on patient outcome. Accordingly, diagnostic
                 research -including the quantification of the effects
                 of diagnostic testing on patient outcome- may be
                 executed more efficiently.

                 Diagnostic research aims to quantify a test's added
                 contribution given other diagnostic information
                 available to the physician in determining the presence
                 or absence of a particular disease. Commonly,
                 diagnostic prediction rules use dichotomous logistic
                 regression analysis to predict the presence or absence
                 of a disease. We showed that genetic programming and
                  polytomous modelling are promising alternatives to the
                  conventional dichotomous logistic regression analysis
                 to develop diagnostic prediction rules. The main
                 advantage of genetic programming is the possibility to
                 create more flexible models with better discrimination.
                 This is especially important in large data sets in
                 which complex interactions between predictors and
                  outcomes may be present.

                  Using polytomous logistic regression, one can directly
                 model diagnostic test results in relation to several
                 diagnostic outcome categories. Simultaneous prediction
                 of several diagnostic outcome probabilities
                 particularly applies to situations in which more than
                 two disorders are considered in the differential
                 diagnoses. As this is commonly the case, polytomous
                 regression analysis may serve clinical practice better
                 than conventional dichotomous regression analysis. Both
                 alternatives deserve closer attention in future
                 diagnostic research.

                 We also showed that the development of a diagnostic
                 prediction rule is not the end of the 'research line',
                  even when a rule is subsequently adjusted for optimism
                  using internal validation techniques such as
                  bootstrapping. External validation of such rules in new
                 patients is always required before introducing a rule
                  in daily practice. This indicates that internal
                  validation alone may not sufficiently indicate a
                  model's performance in future patients. Rather than
                  viewing a validation data set as
                 a separate study to estimate an existing rule's
                 performance, validation data may be combined with data
                 of previous derivation studies to generate more robust
                 prediction models using recently suggested methods.",
  notes =        "* Title

                 * Contents

                 * Chapter 1: Introduction

                 * Chapter 2: Test research versus diagnostic research

                 * Chapter 3: Distraction from randomisation in
                 diagnostic research

                 * Chapter 4: Reappraisal of the nested case-control
                 design in diagnostic research: updating the STARD

                 * Chapter 5: Validating and updating a prediction rule
                  for neurological sequelae after childhood bacterial
                  meningitis

                 * Chapter 6: Genetic programming or multivariable
                 logistic regression in diagnostic research

                 * Chapter 7: Revisiting polytomous regression for
                 diagnostic studies

                 * Chapter 8: Concluding remarks

                 * Summary

                 * Samenvatting

                 * Dankwoord

                 * Curriculum Vitae

                  * Volledig proefschrift (complete dissertation, 520 kB)

                 OMEGA KiQ Ltd.",

Genetic Programming entries for Cornelis Jan Biesheuvel