next up previous contents
Next: Neural networks (history) Up: AI Lecture 2 Previous: Problems with classical (symbolic   Contents

Brain-like computational modelling: neural networks (concept)

This approach came to be known as connectionism.

A biological neuron receives information from other neurons through synaptic connections and passes signals to as many as a thousand other neurons. The synapses vary in strength.

Such a circuit can be replaced by a (simplified) artificial neural circuit in which the ``neurons'' are threshold amplifiers or some other ``units''; wires, resistors and capacitors replace the synaptic connections; the output voltage of an amplifier represents the activity of the model neuron; and currents represent the information flow in the network.

We saw in the Seminar sec. [*] the structure of a neural network: each unit receives through each of its (already weighted) connections a real-valued excitatory or inhibitory input. The unit sums the inputs and gives an output which is a function (usually a threshold function) of that sum. Thus the unit is a non-linear device.
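The behaviour of one such unit can be sketched in a few lines of code. This is a minimal illustration, not from the lecture itself: the weights, inputs and threshold values below are made up for the example.

```python
def unit_output(inputs, weights, threshold=0.5):
    """One artificial unit: sum the weighted inputs, then apply a
    threshold (step) function. The step makes the unit non-linear."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Two excitatory connections (positive weights), one inhibitory (negative).
print(unit_output([1, 1, 1], [0.5, 0.5, -0.4]))  # fires: 0.6 >= 0.5
print(unit_output([1, 0, 1], [0.5, 0.5, -0.4]))  # silent: 0.1 < 0.5
```

Note that because of the threshold, doubling every input does not simply double the output, which is what makes the unit non-linear.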

Both of these systems can be mathematically modelled as dynamical systems: i.e. systems of several interacting parts whose state evolves continuously with time. (The states are not discrete - so how could such a system be modelled by a Turing Machine?)
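To make the idea of a continuously evolving state concrete, here is a small sketch (my own illustration, not from the lecture) of a leaky-integrator model neuron, whose voltage obeys dV/dt = (-V + I)/tau. On a digital computer we can only approximate the continuous evolution, here with small Euler time steps.

```python
def simulate(v0=0.0, current=1.0, tau=10.0, dt=0.1, steps=1000):
    """Approximate the continuous dynamics dV/dt = (-V + I) / tau
    with Euler steps of size dt. The voltage v relaxes smoothly
    toward the input current rather than jumping between discrete states."""
    v = v0
    for _ in range(steps):
        v += dt * (-v + current) / tau
    return v

print(simulate())  # voltage has relaxed close to the input current, 1.0
```

The point of the example is that the state v passes through a continuum of intermediate values; a Turing-machine model, by contrast, would have to discretise both the state and the time step, exactly as the Euler approximation above does.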

In contrast, the internal states of a digital computer between input and output are discrete.

