
- Analogy to biological systems, which are themselves excellent examples of learning systems
- Massive parallelism, allowing for computational efficiency
- The first learning algorithm came in 1959 (Rosenblatt): if a target output value is provided for a single neuron with fixed inputs, the weights can be changed incrementally so that the neuron learns to produce those outputs, using the perceptron learning rule
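The perceptron learning rule adjusts each weight in proportion to the error between the target and the actual output. A minimal sketch (the function name and the learning rate eta are illustrative assumptions, not from the notes):

```python
def perceptron_update(w, x, target, output, eta=0.1):
    """One incremental weight change under the perceptron learning rule:
    w_i <- w_i + eta * (target - output) * x_i."""
    return [wi + eta * (target - output) * xi for wi, xi in zip(w, x)]

# If the neuron output 0 but the target is 1, the weights move toward x:
print(perceptron_update([0.2, -0.4], [1.0, 0.5], target=1, output=0))
```

When target and output agree, the error term is zero and the weights are left unchanged.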

- The n-dimensional input vector x is mapped to the output y by taking the scalar product with a weight vector and then applying a nonlinear activation function
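This mapping can be sketched for a single neuron; the hard-threshold activation and the threshold parameter theta here are illustrative assumptions:

```python
import numpy as np

def neuron_output(x, w, theta):
    """Map input vector x to y via the scalar product with weights w
    followed by a nonlinear activation (a hard threshold here)."""
    net = np.dot(w, x) - theta        # scalar product minus threshold
    return 1 if net >= 0 else 0       # nonlinear step function

# Example: a 3-dimensional input vector
x = np.array([1.0, 0.5, -0.3])
w = np.array([0.4, -0.2, 0.1])
print(neuron_output(x, w, theta=0.0))  # net = 0.27, so prints 1
```

Other activation functions (e.g. the sigmoid) follow the same pattern: only the nonlinearity applied to the scalar product changes.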


- The ultimate objective of training: obtain a set of weights that classifies almost all of the tuples in the training data correctly

Steps:
1. Initialize the weights with random values
2. Feed the input tuples into the network one by one
3. For each unit:
   - Compute the net input to the unit as a linear combination of all the inputs to the unit
   - Compute the output value using the activation function
   - Compute the error
   - Update the weights and the bias
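The steps above can be sketched end to end for a single-unit network; the AND dataset, the step activation, the learning rate, and the epoch count are all illustrative assumptions:

```python
import random

def train_perceptron(data, n_inputs, eta=0.1, epochs=20, seed=0):
    """Follow the listed steps: random initialization, feed tuples one
    by one, compute net input, apply the activation, compute the error,
    then update the weights and the bias."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.5, 0.5) for _ in range(n_inputs)]  # random init
    b = rng.uniform(-0.5, 0.5)
    for _ in range(epochs):
        for x, target in data:                              # one tuple at a time
            net = sum(wi * xi for wi, xi in zip(w, x)) + b  # net input
            out = 1 if net >= 0 else 0                      # activation (step)
            err = target - out                              # error
            w = [wi + eta * err * xi for wi, xi in zip(w, x)]  # update weights
            b = b + eta * err                                  # update bias
    return w, b

# Illustrative training data: the logical AND function
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data, n_inputs=2)
```

Because AND is linearly separable, the unit ends up classifying all four tuples correctly; data that is not linearly separable would need more than a single unit.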