Table 1. Hypothetical Example (adapted from [2]).

Candidate Diagnosis     Pre-test Probability (t0)   Post-test Probability (t1)   Post-test Probability (t2)
Gout                    0.25                         0.5                          0.4
Osteoarthritis          0.5                          0                            0.1
Pseudogout              0.125                        0.5                          0.4
Other possibilities     0.125                        0                            0.1
Using Equation 3, it is straightforward to calculate that the information gain from t1
is 1.5 bits, whereas the information gain had we chosen to perform t2 would have been
0.68 bits (to 2 d.p.). Note again that, by continuity, we take 0 × log2(0) = 0, so any
term with a zero post-test probability simply drops out of the sum. In the first case,
only the gout and pseudogout terms survive:

DKL(t1||t0) = 0.5 × log2(0.5/0.25) + 0.5 × log2(0.5/0.125) = 0.5 × 1 + 0.5 × 2 = 1.5

In the second case we have:

DKL(t2||t0) = 0.4 × log2(0.4/0.25) + 0.1 × log2(0.1/0.5) + 0.4 × log2(0.4/0.125)
              + 0.1 × log2(0.1/0.125)
            = 0.4 × 0.678 + 0.1 × (−2.322) + 0.4 × 1.678 + 0.1 × (−0.322)
            = 0.68 (to 2 d.p.)
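As a quick check on the arithmetic, here is a minimal Python sketch (the function name
kl_divergence_bits and the list layout are illustrative, not part of the chapter) that
computes both relative entropies directly from the probabilities in Table 1:

import math

def kl_divergence_bits(post, pre):
    # Relative entropy D_KL(post || pre) in bits. Terms with a zero
    # post-test probability are skipped, matching the convention
    # 0 × log2(0) = 0 used in the text.
    return sum(q * math.log2(q / p) for q, p in zip(post, pre) if q > 0)

# Probabilities from Table 1, in the order: gout, osteoarthritis,
# pseudogout, other possibilities.
t0 = [0.25, 0.5, 0.125, 0.125]   # pre-test
t1 = [0.5, 0.0, 0.5, 0.0]        # post-test after t1
t2 = [0.4, 0.1, 0.4, 0.1]        # post-test after t2

print(round(kl_divergence_bits(t1, t0), 2))   # 1.5
print(round(kl_divergence_bits(t2, t0), 2))   # 0.68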
The question naturally arises: why use relative entropy and not merely the difference
of the pre-test and post-test entropies as measured using Equation 2? The latter was
indeed proposed in early discussions on the use of entropy in medical decision making.
However, Asch, Patton and Hershey concluded that it “fails to capture reasonable
intuitions about the quantity of information provided by diagnostic tests” [1]. This point
was reiterated in [2], which shows that relative entropy captures those intuitions more
effectively. Kullback and Leibler [5], of course, provide a more formal justification of
what we are calling relative entropy, as a sufficient statistic for discriminating between
two probability distributions.
Let us now take a look at how these concepts from information theory might act as
aids in medical decision making.
4. Shannon entropy and binary outcomes
Many laboratory tests are designed to assess the presence or absence of a disease state:
a binary outcome. We can take a coin flip as a reference point, with the outcomes being
heads or tails. Now, consider a collection of coins that are biased to some extent. That is,
each coin will have a probability p that the outcome is heads, with p varying over the
collection between 0 and 1.
For a given coin C, noting that the probability of tails will then be (1 − p),
Equation 2 gives the entropy:

Eq 5. H(C) = −p × log2(p) − (1 − p) × log2(1 − p)
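A similarly minimal Python sketch of Eq 5 (the name binary_entropy is our own label)
shows that the entropy peaks at one bit for a fair coin and falls towards zero as the
bias increases:

import math

def binary_entropy(p):
    # Shannon entropy H(C) in bits for a binary outcome with
    # probability p of heads; the boundary cases p = 0 and p = 1
    # give zero entropy, again by the convention 0 × log2(0) = 0.
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(binary_entropy(0.5))            # 1.0 bit: a fair coin is maximally uncertain
print(round(binary_entropy(0.9), 2))  # 0.47 bits: a biased coin is more predictable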