Applied Interdisciplinary Theory in Health Informatics: Knowledge Base for Practitioners, p. 29
2. We are ignoring uncertainties in the veracity of a message that might be
introduced by the communication pathway of Figure 2.
Point 2 is critically important when carrying this approach into a real clinical setting. However,
for simplicity of exposition we will continue to set this issue aside until the concluding
section.
3. Relative entropy and diagnostic tests
Let us phrase the diagnostic strategy a little more formally. A patient’s specific internal
state has an ensemble of messages M = (m, AM, PM) associated with it. A message will
normally be triggered by a specific “interrogation” being performed on the patient. An
interrogation may be, for example: a question asked of the patient; a test performed on
the patient; an inspection performed by a nurse.
Prior to an interrogation, the alphabet AM of messages will have a probability
distribution QM over it. Receipt of a message mk (a positive test result, for example) will
result in a posterior probability distribution PM over the alphabet of messages.
To measure the change in entropy, we use the relative entropy, or Kullback-Leibler
divergence, between the two probability distributions [7]:
Eq 3. $D_{KL}(P_M \,\|\, Q_M) = \sum_{k=1}^{n} p_k \log_2 \frac{p_k}{q_k}$
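Eq 3 can be sketched directly in code. The following is a minimal illustration, assuming the two distributions are supplied as aligned lists of probabilities over the same message alphabet (the function name `kl_divergence` is ours, not the chapter's):

```python
import math

def kl_divergence(p, q):
    """Relative entropy D_KL(P||Q) in bits (log base 2), per Eq 3.

    p, q: sequences of probabilities over the same alphabet.
    Terms with p_k == 0 contribute zero, by the convention 0 log 0 = 0;
    q_k is assumed positive wherever p_k is positive.
    """
    return sum(pk * math.log2(pk / qk) for pk, qk in zip(p, q) if pk > 0)
```

For example, `kl_divergence([1.0, 0.0], [0.5, 0.5])` is 1 bit: a message that collapses a uniform two-symbol prior to certainty conveys one bit of information.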
It is worth noting two properties of relative entropy. Firstly, it satisfies what is
known as Gibbs’ inequality, with equality if and only if PM = QM:
Eq 4. $D_{KL}(P_M \,\|\, Q_M) \geq 0$
Secondly, in general it is not symmetric under interchange of the two probability
distributions. That is, $D_{KL}(P_M \,\|\, Q_M) \neq D_{KL}(Q_M \,\|\, P_M)$ in general. Consequently, relative
entropy/Kullback-Leibler divergence does not formally qualify as a distance metric (hence the
use of the term “divergence”).
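Both properties are easy to check numerically. A small sketch, using an illustrative pair of two-symbol distributions chosen by us:

```python
import math

def kl(p, q):
    # Relative entropy D_KL(P||Q) in bits; assumes q_k > 0 wherever p_k > 0
    return sum(pk * math.log2(pk / qk) for pk, qk in zip(p, q) if pk > 0)

p, q = [0.9, 0.1], [0.5, 0.5]
forward = kl(p, q)  # D_KL(P||Q)
reverse = kl(q, p)  # D_KL(Q||P)

# Gibbs' inequality: both are non-negative, and zero only when P = Q.
# Asymmetry: the two directions give different values for these distributions.
print(forward, reverse)
```

The printed values are both positive but unequal, illustrating Gibbs' inequality and the lack of symmetry in one run.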
Expressed in terms of Bayesian inference, DKL(P||Q) is a measure of the information
gained when a physician’s beliefs are revised from a prior Q to a posterior P following
some investigation.
We will use a hypothetical example adapted from [2] to illustrate the approach so
far. We hypothesise a population of patients with arthritis, framed with a prior probability
distribution over four possible syndromes. We have two diagnostic tests that are
available to us, t1 and t2. Table 1 provides the pre-test probabilities and the respective
post-test probabilities following a positive outcome from each of the two tests. Which of
the two tests provides the greater information gain?
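The comparison can be sketched as follows. The probabilities below are illustrative placeholders of our own devising, not the values from Table 1; the logic is simply that the test whose posterior diverges more from the prior yields the greater information gain:

```python
import math

def kl_bits(post, prior):
    # Information gain D_KL(P||Q) in bits when beliefs move from prior Q to posterior P
    return sum(p * math.log2(p / q) for p, q in zip(post, prior) if p > 0)

# Hypothetical numbers only -- NOT the values in Table 1 of the chapter.
prior   = [0.4, 0.3, 0.2, 0.1]      # pre-test probabilities over four syndromes
post_t1 = [0.7, 0.2, 0.05, 0.05]    # posterior after a positive result from t1
post_t2 = [0.5, 0.3, 0.15, 0.05]    # posterior after a positive result from t2

gain_t1 = kl_bits(post_t1, prior)
gain_t2 = kl_bits(post_t2, prior)
print(f"t1 gain: {gain_t1:.3f} bits, t2 gain: {gain_t2:.3f} bits")
```

With these placeholder numbers, t1 moves the distribution further from the prior and so provides the larger gain; with the real Table 1 values the same comparison applies.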
P. Krause / Information Theory and Medical Decision Making 29
- Title: Applied Interdisciplinary Theory in Health Informatics
- Subtitle: Knowledge Base for Practitioners
- Authors: Philip Scott, Nicolette de Keizer, Andrew Georgiou
- Publisher: IOS Press BV
- Location: Amsterdam
- Date: 2019
- Language: English
- License: CC BY-NC 4.0
- ISBN: 978-1-61499-991-1
- Size: 16.0 x 24.0 cm
- Pages: 242
- Category: Informatics