Nature inspired algorithms

CSC 302 1.5 Neural Networks Introduction

 Mimic (or model) nature
– Brain (Neural networks)
– Genetics (Genetic algorithms)
– Ant colonies (Ant colony algorithms)
– Schools of fish, flocks of birds (Particle swarm intelligence)
– Annealing in metals (Simulated annealing)


Artificial Neural Network (ANN)


Artificial Neural Network (ANN) …

 Highly interconnected network of information-processing elements that mimics the connectivity and functioning of the human brain.
 Addresses problems difficult for traditional computers.
– E.g. Speech and pattern recognition
 Strength
– Ability to learn from a limited set of examples.

Differences Between Neural Networks and Traditional Computers

Traditional computers
– E.g. Personal computers, Workstations, Mainframes

Neural Networks
– Use a large number of simple processors to do their calculations.
– Do not have a centrally located memory, nor are they programmed with a sequence of instructions. Memory is distributed in the form of the weights given to the various connections; damage to part of the network does not necessarily result in processing malfunction or information loss.
– The information processing of a NN is distributed throughout the network, across its processors and connections.
– A NN is taught with a limited set of training examples, and can respond to information sets that it has never encountered before. The values of the connection weights can be thought of as its 'program'.

Traditional Computers
– Use one or a few extremely complex processing units.
– Programs are written as a series of instructions.

Why Neural Networks?
 Scientists tried to use machines effectively for tasks relatively simple for humans. E.g.
– Recognizing a character.
– Distinguishing a cat from a bird.
– Achieving a goal satisfying certain constraints.

Implementation
 Simulated on traditional computers.
 Advantage
– Computers can be easily reprogrammed to change the architecture or learning rule.
– E.g. Since a NN is massively parallel, its speed can be increased by using parallel computers.
 Link together hundreds or thousands of CPUs in parallel.

Why Neural Networks? …
 Machine learning
– An adaptive mechanism that enables computers to learn from experience, by example and by analogy.
 E.g. A street light is like a star: both provide light at night, both are in predictable locations, both are overhead, and both serve no function in the daytime.
– Learning capabilities (of an intelligent system) improve over time.
– Basis for adaptive systems
– Neural networks and Genetic algorithms

Why Neural Networks? …
 Began approximately 65 years ago.
– McCulloch-Pitts neurons (1943)
– Warren McCulloch and Walter Pitts designed the first neural network.
– Found that combining many simple neurons into a neural system was the source of increased computational power.
 Limitations encountered in traditional sequential computing.

Why Neural Networks? …
 Neural nets are of interest to many research areas.
– Signal processing and control theory
– Robotics
– Pattern recognition
– Finding the explicit form of the relationship among certain variables.

What is a Neural Network?
 A collection of interconnected neurons that incrementally learn from their environment (data) to capture essential linear and nonlinear trends in complex data, so that they provide reliable predictions for new situations containing even noisy and partial information.

What is a Neural Network? …
 An Artificial Neural Network (ANN) is an information-processing system that has certain characteristics in common with biological neural networks.
 The brain consists of an interconnected set of nerve cells (basic information-processing units, neurons).
 The human brain incorporates nearly
– 100 billion neurons (1 billion = 10^9)
– Typical neurons in the brain are connected to on the order of 10,000 other neurons, with some types of neurons having more than 200,000 connections.
 By using millions of neurons, the brain can perform its functions much faster than the fastest computers on the market.

What is a Neural Network? …
 Biological neuron: a neuron consists of
– a cell body (soma)
– a number of fibres (dendrites)
– a single long fibre (axon)
 The dendrites receive signals (electric impulses) from other neurons.
 The signals are transmitted across a synaptic gap by means of a chemical process.

What is a Neural Network? …
 The chemical transmitter modifies the incoming signal.
– Typically, by scaling the frequency of the signals that are received.
– Similar to the action of weights in an ANN.
 The soma sums the incoming signals.
 When sufficient input is received, the soma fires a signal over its axon to the other neurons.

What is a Neural Network? …
 The brain can be considered a highly complex, nonlinear and parallel information-processing system.
– Information is stored and processed simultaneously throughout the whole network, rather than at specific locations.
– In other words, both data and processing are global rather than local.

What is a Neural Network? …

What is a Neural Network? …

 Plasticity



– Neurons demonstrate longlong-term changes in the strength of their connections. – Neurons can can form new connections with other neurons. y migrate g – Entire collection of neurons may from one place to another. g in the brain. – Basis for learning 19

Key ey features eatu es bo borrowed o ed from o b biological o og ca neurons: – The processing element receives many signals. – Signals Si l may be b modified difi d by b a weight i h at the h receiving synapse. processing g element sums the weighted g – The p input. – Under appropriate circumstances (sufficient input) the neuron transmits a single output input), output. – The output from a particular neuron may go to many other neurons (the axon branches) 20
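These features translate almost line-for-line into code. A minimal sketch of such a processing element, in the spirit of the McCulloch-Pitts threshold unit (the weights, threshold, and inputs below are illustrative, not taken from the slides):

```python
def neuron(inputs, weights, threshold):
    """Receives many signals, weights each at its 'synapse', sums the
    weighted input, and fires a single output when the sum is sufficient."""
    net = sum(w * x for w, x in zip(weights, inputs))
    return 1 if net >= threshold else 0

# One output value; the axon may branch to many downstream neurons,
# each branch carrying the same signal.
output = neuron([1, 0, 1], weights=[0.5, 0.3, 0.4], threshold=0.8)
downstream_inputs = [output, output]  # same value sent along each branch
```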

What is a Neural Network? …  Fault

What is a Neural Network? …

tolerance

fi yuo cna raed tihs tihs, yuo hvae a sgtrane mnid too too.

– Able to recognize many input signals that are somewhat different from any signals we have seen before.

Cna yuo raed tihs? Olny 55 plepoe can. i cdnuolt blveiee taht I cluod aulaclty uesdnatnrd waht I was rdanieg. The phaonmneal pweor of the hmuan mnid, aoccdrnig to a rscheearch at Cmabrigde Uinervtisy, it dseno't mtaetr in waht oerdr the ltteres in a wrod are, the olny iproamtnt tihng is taht the frsit and lsat ltteer be in the rghit pclae. The rset can be a taotl mses and you can sitll raed it whotuit a pboerlm. Tihs is bcuseae the huamn mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe. Azanmig huh? yaeh and I awlyas l ttghuhot h h t slpeling l li was iipmorantt! tt!

 E.g.

Ability to recognize a person after a long period of time. time

– Able to tolerate damage to the neural system y itself.  In

case of loss or damage of neurons, other neurons can sometimes be trained to take over the functions of lost or damaged cells. 21

What is a Neural Network? …

22

How do ANNs model the brain?
 An ANN consists of a large number of very simple processing elements called neurons, units, cells or nodes.
 The neurons are connected by weighted links passing signals from one neuron to another.
 Each neuron receives a number of input signals through its connections.
 However, it never produces more than a single output signal.
 An ANN resembles the human brain much as a paper plane resembles a supersonic jet.

How do ANNs model the brain? …
 The output signal is transmitted through the neuron's outgoing connection (corresponding to the biological axon).
 The outgoing signal splits into a number of branches that transmit the same signal.
– The signal is not divided among these branches in any way.
 The outgoing branches terminate at the incoming connections of other neurons in the network.

[Diagram of a neuron: input signals x1, x2, …, xn arrive over weights w1, w2, …, wn; the neuron emits a single output signal y, and each outgoing branch carries the same value y.]

How do ANNs model the brain? …
 One of the most common NN architectures has three layers.
– First layer (Input layer) – the only layer exposed to external signals.
– Hidden layer – the input layer transmits signals to the hidden layer, which extracts relevant features or patterns from the received signals.
– Output layer – those features that are considered important are then transmitted to the output layer.
 Complex NNs may have
– Several hidden layers,
– Feedback loops.

[Diagram: input signals enter the Input layer, pass through the Hidden layer, and leave the Output layer as output signals.]
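The input → hidden → output flow can be sketched as two fully connected layers. The layer sizes, weight values, and logistic activation here are arbitrary choices for illustration:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weight_matrix):
    """One fully connected layer: each row of weights feeds one neuron."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)))
            for row in weight_matrix]

# 2 inputs -> 3 hidden neurons -> 1 output neuron (weights are made up)
w_hidden = [[0.2, -0.5], [0.7, 0.1], [-0.3, 0.8]]
w_output = [[0.5, -0.6, 0.9]]

signals = [1.0, 0.5]                 # the input layer just passes signals on
hidden = layer(signals, w_hidden)    # hidden layer extracts features
output = layer(hidden, w_output)     # output layer reports the result
```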

How do ANNs model the brain? …
 Activation (or Activity Level) – an internal state of a neuron.
– A function of the inputs it has received.
 E.g.
– Activations of X1, X2, and X3 are x1, x2, and x3 respectively.
– The net input (yin) of Y is the sum of the weighted signals from X1, X2, and X3.
– i.e. yin = w1x1 + w2x2 + w3x3
– The activation of Y is a function of its net input, y = f(yin).
– E.g. Logistic sigmoid function f(x) = 1 / (1 + e^-x)
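Plugging illustrative numbers into the formulas above (the activation and weight values are made up for the example):

```python
import math

x1, x2, x3 = 1.0, 0.0, 1.0       # activations of X1, X2, X3
w1, w2, w3 = 0.5, -0.2, 0.3      # connection weights into Y

y_in = w1 * x1 + w2 * x2 + w3 * x3   # net input: 0.5 + 0.0 + 0.3 = 0.8
y = 1.0 / (1.0 + math.exp(-y_in))    # logistic sigmoid f(x) = 1/(1+e^-x)
```

The sigmoid squashes any net input into the range (0, 1), so here y is roughly 0.69.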

Training Set  NN

 E.g.

– If Y is connected to neuron Z1 and Z2, Y sends its signal to each of these units. g , the values – However,, in general, received by neurons Z1 and Z2 are different. Why? 31

are trained by presenting them with desired inputinput-output training sets.  Initially, connection weights are q random or equal.  Inputs are entered into the input y of NN. layer  Output signal is computed and p to the target g output. p compared 32

Training Set …
 Small adjustments are then made to the connection weights to reduce the difference between the target and computed outputs.
 The input-output set is again presented to the NN and further changes are made to the connection weights.
 After repeating this process many times for all input-output patterns in the training set, the NN learns to respond in the desired manner.
 A NN is said to have learnt when it can correctly perform the tasks for which it has been trained.
 After a NN has been trained, it can correctly process new data that it has not encountered before.
 Training rule – used to make adjustments to connection weights to reduce the difference b/w computed and target outputs.
– Back-propagation rule
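The adjust-and-repeat cycle above can be sketched with the simplest weight-update rule: a perceptron-style rule standing in for back-propagation. The task (learning logical AND), the learning rate, and the epoch count are all illustrative choices:

```python
# Training set: desired input-output pairs for logical AND
training_set = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

weights = [0.0, 0.0]   # initially equal (could also be random)
bias = 0.0
rate = 0.1             # size of each small adjustment

def compute(x):
    net = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if net > 0 else 0

# Present all input-output patterns many times, nudging the weights
# toward the target after each comparison of computed vs. target output.
for _ in range(25):
    for x, target in training_set:
        error = target - compute(x)
        weights = [w + rate * error * xi for w, xi in zip(weights, x)]
        bias += rate * error

results = [compute(x) for x, _ in training_set]
```

After enough passes the error stops changing the weights: the net has learnt the task.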

Where Are NN Being Used?  Pattern

34

Where Are NN Being Used? …

Recognition



Medicine – An auto auto--associative memory NN has been used to store large number of medical records. – Each of these records include

– Many interesting problems fall into the general area of pattern recognition. recognition – E.g. Automatic recognition of handwritten characters (digits or letters). purpose p multilayer y neural nets – GeneralGeneral-p with backback-propagation have been used for recognizing handwritten zip codes.

Symptoms  Diagnosis  Treatment for a particular case. 

– After training, the net can be presented with input consisting of a set of symptoms. – It will then find the ‘best’ diagnosis and treatment treatment. 35

36

Where Are NN Being Used? …  Speech

Production



Speech p Production … – NETtalk – A multi multi--layer neural net (with hidden units). – Requirements

– Learning to read English text aloud is a difficult task. task – Correct pronunciation of a letter depends on the context the letter appears.  E.g.

Where Are NN Being Used? …



Set of examples of written input – the letter that is currently being spoken – Three letters before and after it (context)



but, cut, put

Correct pronunciation for it

– Additi Additionall symbols b l are used d to t indicate i di t the th end of a word or punctuation.

37

Where Are NN Being Used? …  Speech

Production …

38

Where Are NN Being Used? …  Business

– The net is trained using the 1,000 most common English g words. – After training, the net can read new words with very y few errors.

– Chase Manhattan Bank used a NN to examine about the stolen credit cards.  Discovered

that most suspicious sales were for women’s shoes costing b/w $40 and $80. $80

39

40

Where Are NN Being Used? …  Business

 Neurons



are arranged g in layers. y

– Neurons in the same layer behave in the same manner. – Behaviour of a neuron are

– Mortgage risk assessment  Training

Net Architecture

input includes

– Applicant’s years of employment – Number of dependents – Current income, income etc. etc

 Its

 The

target output from the net is an ‘accept’ or ‘reject’ j the mortgage g g application. pp

activation function  Pattern of weighted connections connections, it sends and receives signals.

– Within each layer, y , neurons usually y have  The

same activation function  The same pattern of connections to other neurons 41

42

Single Layer/Multilayer
 The input layer is not considered when determining the number of layers.
 The number of layers in the net = the number of layers of weighted interconnect links b/w the layers of neurons.

Single Layer
 Has one layer of connection weights.
 Units can be distinguished as
– Input units
– Output units

Multilayer
 Has one or more layers of nodes b/w the input units and the output units.
 Can solve more complicated problems.
– Can be trained to solve a problem that a single-layer net cannot be trained to perform correctly at all.
 Training may be more difficult.
 However, in some cases, training may be more successful.
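The classic example of such a problem is XOR: no single layer of weights can compute it, but one hidden layer suffices. The weights and thresholds below are hand-picked for the sketch (binary step units), not learned:

```python
def step(x, threshold):
    return 1 if x >= threshold else 0

def xor_net(x1, x2):
    # Hidden layer: two units computing OR and AND of the inputs
    h_or = step(x1 + x2, 0.5)
    h_and = step(x1 + x2, 1.5)
    # Output layer: fires for OR-but-not-AND, i.e. XOR
    return step(h_or - h_and, 0.5)

# The four XOR cases are not linearly separable at the inputs,
# but the hidden layer's features make them separable.
outputs = [xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
```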

Setting the Weights
 The method of setting the values of the weights (training) is an important characteristic of NNs.
 Three types of training
– Supervised
– Unsupervised
– Reinforcement

Supervised Training
 Learning with a teacher.
 Accomplished by providing a sequence of training vectors, or patterns.
 Each training vector is associated with a target output vector.
 The weights are then adjusted according to a learning algorithm.

Unsupervised Training
 A sequence of input vectors is provided.
 But target vectors are not specified.
 The net modifies the weights so that the most similar input vectors are assigned to the same output unit.
 Can learn to discover unknown clusters.
– For example, they may cluster similar species, groups, protein structures, etc.

Reinforcement Training
 The exact answer is not presented to the network.
 Only an indication of whether the output generated by the network is right or wrong is given.
 Learning with a critic.
 The network uses this information to improve its performance.
 Useful when the knowledge required to apply supervised learning is not available.
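One concrete rule of the unsupervised kind (my choice for the sketch, not specified in the slides) is winner-take-all competitive learning: each input is assigned to the output unit whose weight vector is most similar, and that winner's weights move toward the input, so similar inputs end up assigned to the same unit:

```python
# Two output units, each with a weight vector living in the input space.
weights = [[0.9, 0.1], [0.1, 0.9]]
rate = 0.5

def distance(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

def winner(x):
    # The output unit most similar to the input wins.
    return min(range(len(weights)), key=lambda i: distance(weights[i], x))

def train(inputs, epochs=10):
    for _ in range(epochs):
        for x in inputs:
            w = winner(x)
            # The winner moves toward the input; a cluster forms around it.
            weights[w] = [wi + rate * (xi - wi)
                          for wi, xi in zip(weights[w], x)]

# No target vectors anywhere: only inputs are given.
train([[1.0, 0.0], [0.8, 0.2], [0.0, 1.0], [0.2, 0.8]])
```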

Common Activation Functions
 Typically, the same activation function is used for all neurons in a particular layer.
 Identity function
– f(x) = x for all x.
– Activation function for input units.
 Binary step function (with threshold θ)
– f(x) = 1 if x ≥ θ; f(x) = 0 if x < θ.
 Binary sigmoid
– f(x) = 1 / (1 + e^-x)
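The three functions above can be written directly. The threshold θ defaults to 0 here, and the standard logistic form of the binary sigmoid is assumed:

```python
import math

def identity(x):
    # Used for input units: passes the signal through unchanged.
    return x

def binary_step(x, theta=0.0):
    # Fires 1 once the input reaches the threshold theta.
    return 1 if x >= theta else 0

def binary_sigmoid(x):
    # Logistic function: smooth, output ranges over (0, 1).
    return 1.0 / (1.0 + math.exp(-x))
```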
