or prediction. The programmer does not input any specific rules into the network. On the contrary, the system “learns” by means of examples. For instance, an ANN can be trained to identify pictures of cars without knowing that cars have four wheels, seats, or a trunk. Instead, the network receives pictures that have been labelled as “car” or “not a car” and infers what the underlying characteristics of automobiles are. This is what we call supervised learning, as the system receives items that have been correctly classified. In other words: the network recognizes the pattern that appears in the examples it receives.
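By way of illustration, a minimal Python sketch of this supervised setting might look as follows; the feature vectors and the “car”/“not a car” labels are invented stand-ins for labelled pictures, and the network size is arbitrary:

```python
# A toy supervised-learning example: the network is never given the rule,
# only items that have been labelled beforehand.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(seed=0)

# 200 items described by 8 numeric features; label 1 = "car", 0 = "not a car".
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # a hidden, learnable pattern

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
model.fit(X, y)  # the network infers the pattern from the labelled examples

print(model.predict(X[:5]))  # classify the first five items again
```

The point is that the rule relating features to labels is never stated anywhere; the network recovers it from the labelled examples alone.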
ANNs have been remarkably successful in performing difficult tasks such as computer vision, speech recognition, machine translation, and medical diagnosis. All these applications correspond to classification problems, where we need to identify which of a set of categories a new observation belongs to. There is a second, no less important type of problem known as forecasting. In forecasting, the network detects the patterns that underlie the time-dependent evolution of a variable and predicts its unfolding. ANNs have also excelled at this task.
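Forecasting can be sketched in the same spirit: a network learns to predict the next value of a series from a window of its past values. The series, the window length, and the network size below are purely illustrative:

```python
# A toy forecasting example: predict the next value of a time series
# from the previous ten values.
import numpy as np
from sklearn.neural_network import MLPRegressor

t = np.arange(500)
series = np.sin(0.1 * t)  # an invented time-dependent variable

window = 10
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]  # the value that follows each window

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X, y)

print(model.predict(series[-window:].reshape(1, -1)))  # one-step-ahead forecast
```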
There are many different flavours of networks, which have been shown to have varying strengths. Normally, these networks are structured in several layers: some of the units are in direct contact with the input they receive, others constitute the output, and the remaining ones stay hidden. Each neuron receives an input from the set of neurons connected to it and uses this input to generate a single output by means of a relatively simple function, usually just a linear combination of the inputs and some weights. This linear combination is passed through an activation function that maps it into a bounded interval. The hyperbolic tangent, which maps into (-1, 1), and the sigmoid function, which maps into (0, 1), are among the most common activation functions used in ANNs.
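In code, the computation performed by a single neuron is short; the inputs, weights, and bias below are arbitrary illustrative values:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))  # maps any real number into (0, 1)

inputs = np.array([0.5, -1.2, 0.3])   # outputs of the connected neurons
weights = np.array([0.8, 0.1, -0.4])  # one weight per incoming connection
bias = 0.05

z = np.dot(weights, inputs) + bias  # the linear combination
print(sigmoid(z))                   # or np.tanh(z), which maps into (-1, 1)
```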
These simple calculations provide the framework for the ANN. The weights that define each neuron are calculated by applying what is known as a training algorithm. All training algorithms start by allocating random starting weights to the network, which are then progressively adjusted taking into account the errors they produce in the output. Backpropagation is the method that calculates how the current error relates to each of the weights and how they should be adjusted. It is efficient and can be executed with short computation times, so that it is possible to quickly obtain
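The core of this training idea can be sketched for a single sigmoid neuron; the data, learning rate, and number of steps are invented, and a full ANN backpropagates the same kind of gradient through every layer rather than just one:

```python
# Random starting weights, progressively adjusted against the errors
# they produce in the output.
import numpy as np

rng = np.random.default_rng(seed=0)
X = rng.normal(size=(100, 3))
y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)  # toy target rule

weights = rng.normal(size=3)  # random starting weights
lr = 0.5                      # learning rate

for step in range(200):
    out = 1.0 / (1.0 + np.exp(-(X @ weights)))  # forward pass
    error = out - y                             # the error each prediction makes
    grad = X.T @ error / len(y)                 # how the error relates to each weight
    weights -= lr * grad                        # progressive adjustment

print(weights)  # the weights now point in the direction of the target rule
```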