LOCALIZED LEARNING WITH THE ADAPTIVE BIAS PERCEPTRON

Ryanne Thomas Dolan, CS, University of Missouri-Columbia

INTRODUCTION

The human brain is composed of about 100 billion neurons. Study of the local interactions among neurons in the brain has inspired many artificial neural networks (ANNs), but such models are a far stretch from their natural counterparts.

Self-organization is a biological phenomenon in which large networks of simple organisms (cells, termites, fish) exhibit complex behavior beyond the capabilities of any individual. Specifically, self-organization relies only on local interactions between individuals in the network.

ANNs are often used to classify vector data into clusters because of their ability to learn complex mappings between vector spaces.

RESEARCH AIMS

Can we use mechanisms of self-organization to build a better neuron?

● Design an artificial neuron which learns only from internal information and local interactions with other neurons
● Demonstrate the new neuron's ability to learn individually and in cooperating networks

THE ABP

[Figure: The Perceptron — inputs X1, X2, X3, ..., Xn with weights W1, W2, W3, ..., Wn feeding a threshold unit that produces output y]

The Adaptive Bias Perceptron (ABP) is based on the simplest artificial neuron, the perceptron. However, the ABP uses internal information to “learn” the parameters W (weights) and θ (bias). The ABP mimics the biological phenomenon called adaptation, in which neurons adjust their sensitivity to inputs depending on past inputs.





The ABP learns its bias differently from standard perceptron learning algorithms: the neuron's bias is continually adjusted in a pattern similar to the way some biological neurons continually adapt their input sensitivity.

The ABP is governed by three update rules:

    Eqn 1 (linear output function):  y = 1 if x⋅w ≥ θ, −1 otherwise
    Eqn 2 (adaptation function):     Δθ = β (y − θ)
    Eqn 3 (learning function):       Δw_i = η ε (x_i − w_i)

Eqn 1 separates the input space with a linear boundary. Eqn 2 adjusts input sensitivity after each iteration. Eqn 3 searches for the target linear boundary.

β, adaptation gain; can decay over time to improve convergence
η, learning gain; can decay over time to improve convergence
ε, training error; used to drive the ABP to the desired linear mapping
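A minimal sketch of these update rules in Python (the class structure, initialization, and default gains are assumptions, not from the poster):

    import numpy as np

    class ABP:
        """Adaptive Bias Perceptron: Eqns 1-3 above."""

        def __init__(self, n_inputs, beta=0.1, eta=0.1):
            self.w = np.zeros(n_inputs)  # weights W (initialization assumed)
            self.theta = 0.0             # bias theta, learned by adaptation
            self.beta = beta             # adaptation gain
            self.eta = eta               # learning gain

        def output(self, x):
            # Eqn 1: linear output function
            return 1.0 if x @ self.w >= self.theta else -1.0

        def train_step(self, x, error):
            y = self.output(x)
            # Eqn 2: adaptation -- bias drifts toward recent outputs
            self.theta += self.beta * (y - self.theta)
            # Eqn 3: learning -- weights move toward the input, scaled by error
            self.w += self.eta * error * (x - self.w)
            return y

Here the error argument plays the role of ε; one simple choice, mirroring the reinforcement scheme used later for networks, is 0 when the output matches the desired label and 1 otherwise.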

TESTING SCENARIO

Here we evaluate the network's ability to learn mappings by training it with overlapping cluster "clouds", such as the ones below.

[Figure: Target Mapping — overlapping clusters A, B, and C in the plane]

Test data sets provided by Dr. Jim Keller of MU ECE.
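The original test sets are not reproduced here; the snippet below generates stand-in clouds of the same flavor (the cluster centers, spread, and counts are invented for illustration):

    import numpy as np

    rng = np.random.default_rng(0)

    def make_clouds(n_per_class=100, spread=0.15):
        """Generate overlapping 2-D Gaussian 'clouds' labeled A, B, C."""
        centers = {"A": (-0.1, 0.1), "B": (0.2, 0.3), "C": (0.4, -0.2)}  # invented
        points, labels = [], []
        for label, center in centers.items():
            points.append(rng.normal(center, spread, size=(n_per_class, 2)))
            labels += [label] * n_per_class
        return np.vstack(points), labels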

ADAPTIVE BIAS NETWORKS

[Figure: a small ABN — input layer neurons 1 and 2, hidden layer neurons 3 and 4, output layer neuron 5, connected by weights w31, w32, w41, w42, w53, w54]

Self-organizing Adaptive Bias Networks (ABNs) use a nonlinear version of the ABP. Each neuron in the network learns from local information only. The network as a whole can be driven to a target mapping using reinforcement training.

y j t =tanh

x ji t w ji t  − j t  ∑ [ ]

 j t =tanh

w kj t [ y j t −w kj t ] ∑ [ ]


Eqn 3, adaptation function:

    θ_j(t+1) = θ_j(t) + β(t) [ y_j(t) − θ_j(t) ]

Eqn 4, learning function:

    w_ji(t+1) = w_ji(t) + η(t) ε_j(t) [ x_ji(t) − w_ji(t) ]

Eqn 5, adaptation decay function:

    β(t+1) = α β(t)

Eqn 6, learning decay function:

    η(t+1) = α η(t)

α, decay factor, ~0.99
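A sketch of one ABN neuron implementing Eqns 1-6 (class structure, initial weight range, and default gains are assumptions):

    import numpy as np

    class ABNNeuron:
        """One nonlinear ABP for use in an Adaptive Bias Network."""

        def __init__(self, n_inputs, beta=0.1, eta=0.1, alpha=0.99):
            rng = np.random.default_rng()
            self.w = rng.uniform(-0.5, 0.5, n_inputs)  # incoming weights w_ji (init assumed)
            self.theta = 0.0                           # bias theta_j
            self.beta, self.eta, self.alpha = beta, eta, alpha

        def output(self, x):
            # Eqn 1: nonlinear output function
            return np.tanh(x @ self.w - self.theta)

        def local_error(self, y, w_out):
            # Eqn 2: local error from the neuron's own output and outgoing weights only
            return np.tanh(np.sum(w_out * (y - w_out)))

        def update(self, x, y, eps):
            # Eqn 3: adaptation -- bias tracks recent output
            self.theta += self.beta * (y - self.theta)
            # Eqn 4: learning -- weights move toward the input, gated by local error
            self.w += self.eta * eps * (x - self.w)
            # Eqns 5-6: gains decay over time to improve convergence
            self.beta *= self.alpha
            self.eta *= self.alpha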



Feedback is provided immediately after a neuron fires. The error signal is not backpropagated through the network and does not originate at the output layer. Instead, the network learns from simple reinforcement training in which the error at each output neuron is 0 when the network classifies correctly and 1 when it makes mistakes.
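A hypothetical reinforcement loop tying these pieces together for a 2-3-1 network on the stand-in clouds above (the two-class A-vs-rest encoding and all constants are invented for illustration):

    X, labels = make_clouds()
    targets = np.array([1.0 if lab == "A" else -1.0 for lab in labels])  # assumed encoding

    hidden = [ABNNeuron(2) for _ in range(3)]
    out = ABNNeuron(3)

    for t in range(5000):  # matches the 5000-iteration runs shown below
        i = t % len(X)
        h = np.array([n.output(X[i]) for n in hidden])   # hidden activations
        y = out.output(h)                                # network output
        # reinforcement error: 0 on a correct classification, 1 on a mistake
        eps_out = 0.0 if np.sign(y) == np.sign(targets[i]) else 1.0
        # compute hidden local errors from outgoing weights before any updates
        eps_hidden = [n.local_error(h[j], np.array([out.w[j]]))
                      for j, n in enumerate(hidden)]
        out.update(h, y, eps_out)
        for j, n in enumerate(hidden):
            n.update(X[i], h[j], eps_hidden[j])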

RESULTS

ABP networks tend to converge on a mapping which approximates the target mapping. As the network learns, each ABP adjusts its internal parameters using only local information.

[Figure: Biases at Hidden Layer, Weights at Hidden Neuron, and training reward versus Time (iterations), over 5000 iterations of training]

[Figure: Resulting Mapping — the learned class regions for clusters A, B, and C, approximating the Target Mapping]


CONCLUSION

Inspired by biological self-organization and neural adaptation, the Adaptive Bias Network learns from reinforcement training without back-propagating error. This novel network's highly modular and decoupled topology is well suited to distributed or hardware implementations.
