Spike-Timing Dependent Plasticity Learning for Visual-Based Obstacle Avoidance

Hédi Soula¹ and Guillaume Beslon²

¹ LBM/NIDDK, National Institutes of Health, Bethesda, MD, 20892, USA
² PRISMA, National Institute of Applied Sciences, 69621 Villeurbanne Cedex, France

Abstract. In this paper, we train a robot online on an obstacle avoidance task. The robot has at its disposal only the visual input from a linear camera, in an arena whose walls are covered with random black and white stripes. The robot is controlled by a recurrent spiking (integrate-and-fire) neural network. The learning rule is spike-timing dependent plasticity (STDP) together with its counterpart, the so-called anti-STDP. Since the task itself requires some temporal integration, the neural substrate for it is the network's own dynamics. The avoidance behaviors we obtain are homogeneous and elegant. In addition, we observe the emergence of a neural selectivity to distance after the learning process.

1 Introduction

From a dynamical systems point of view, a behavior is a spatio-temporally structured relationship between an organism and its environment. The dynamical loops generated by the input/output flow and by the neural system are coupled together to produce a minimal cognition [1]. A non purely reactive architecture is then obtained through neural dynamics. Consequently, learning a behavior results from pairing (coupling) the dynamics of the input/output loop with the dynamics generated by the artificial brain. Even from that point of view, however, the problem remains unsolved. Many architectures have proposed dynamical processes for learning temporal series. Most of the time, the learning procedures are off-line and supervised (their performance relies solely on good foresight from the designer). An alternative was found in genetic algorithms [2,3,4]. Unfortunately, both approaches still lack on-line adaptation methods and offer no obvious path for enabling them. These facts are our starting point. A neural controller must exhibit enough intrinsic features to encompass a wide range of dynamics. Learning will therefore be a plasticity mechanism that allows us to constrain the network to particular (and interesting) dynamics, depending on the experience of the agent. The plasticity mechanism is inspired from biology: spike-timing dependent plasticity. Using an over-simplistic scaling, we show that an agent can learn to avoid obstacles using only its visual flow. This approach provides an elegant solution to a non-trivial temporal task. We also show the emergence of a spiking selectivity to the distance from obstacles, although this distance is not provided as such to the agent.

S. Nolfi et al. (Eds.): SAB 2006, LNAI 4095, pp. 335–345, 2006. © Springer-Verlag Berlin Heidelberg 2006

2 Dynamics of Integrate and Fire Neurons

The following series of equations describes the discrete leaky integrate-and-fire (I&F) model we use throughout this paper [5]. Each time a given neuron fires, a synaptic pulse is transmitted to all the other neurons. This firing occurs whenever the neuron potential V crosses a threshold θ from below. Just after the firing, the potential of the neuron is reset to 0. Between a reset and a spike, the dynamics of the potential of a neuron (labelled i) is given by the following (discrete) temporal equation:

V_i(t + 1) = γ V_i(t) + Σ_{n>0} Σ_{j=1}^{N} W_ij δ(t − T_j^n) + I(t)    (1)

The first part of the right-hand side of the equation describes the leak current: γ is the decay rate (1 − γ is the leak, 0 ≤ γ ≤ 1). I(t) is an external input current (up to some conductance constant). The W_ij are the synaptic influences (weights) and δ(x) = 1 whenever x = 0 and 0 otherwise (Kronecker symbol). The T_j^n are the firing times of neuron j, each a multiple of the sample discretization time. The firing times are formally defined for all neurons i by V_i(T_i^n) ≥ θ, and the n-th firing time recursively as:

T_i^n = inf{ t | t > T_i^{n−1} + r_i, V_i(t) ≥ θ }    (2)

We set T_i^0 = −∞, and r_i is the refractory period of the neuron (which imposes a maximal frequency). Once it has fired, the neuron's potential is reset to zero. Thus, when computing V_i(T_i^n + 1) we set V_i(T_i^n) = 0 in equation (1). Leaky integrate-and-fire neurons are known to be good approximations of biological neurons as far as spiking time distributions are concerned. Moreover, they are simple enough and easy to handle when embedded in a robot. Population coding and mean field techniques have shown that spiking neural networks can display a broad variety of dynamics [6,7,8]. In simple cases, sufficient conditions for phase synchronization and its stability have been proposed for homogeneous networks [9,10]. In the precise case of integrate-and-fire neurons, equilibrium criteria have been calculated for networks of irregularly firing neurons [11,12] and VLSI neurons [13,14]. Although not directly applicable to our problem, these works showed that, in random networks, the parameters of the distribution of synaptic weights determine whether the firing activity is driven by the input (input or noise drift) or by the internal coupling (internal drift) of the neurons. To quantify this range of parameters, we described analytically the bifurcation map of totally connected networks in the limit of no input drift [15], the so-called spontaneous mode. This bifurcation map allowed us to estimate rather precise conditions for a purely internal regime to occur. Consequently, these two types of drift can impose quite different behaviors on an agent controlled by such networks. Indeed, a network whose dynamics relies solely on the input drift yields an agent with an input-led behavior: a reactive

Fig. 1. Differences in behavior under a sinusoidal input, according to the strength of the coupling. The figures on the left show the temporal evolution of the average potential (over all neurons) of the network. On the right are the corresponding power spectra in a log-log plot. Parameters: N = 100, θ = 1.0, γ = 0.99, Vrest = Vreset = 0.0, r = 4. The input signal is I(t) = 0.2 × sin(t/1200), giving a frequency of around 8.5 Hz.

architecture. The network serves as a temporally stable filter between sensors and actuators. On the other hand, a network in the internal drift mode uses only its own dynamical properties to compute the output. The agent then exhibits a stereotypical ("autistic") behavior. We call this an automatic architecture. In that case, the internal dynamics dominates the flow of input and is able to provide time-dependent responses. This evolution from external to internal drift-led activity is shown in figure 1. We compute the average potential of an all-to-all coupled network of I&F neurons when all neurons are submitted to a sinusoidal input. The weights are chosen randomly following a centered normal law. In that case, the internal drift increases with the standard deviation of the weight distribution [15]. The network response thus ranges from a passive filter (σ = 0.21, carrying the input frequency) to a signal that completely ignores the intrinsic frequencies of the input (σ = 0.55). In the former case, the output reacts to both the spectral and the tonic part of the input signal. In the latter case, however, the network acknowledges only the tonic part. The intermediate value of coupling appears to be a combination of both extreme cases: the network output depends critically on both the internal and the external component. In robot control terms, this kind of coupling allows both adaptivity to input variations and memory of the past.
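As a concrete illustration of the input-drift end of this spectrum, the sketch below (our own minimal reimplementation, not the authors' code; the refractory period is omitted and the drive is deliberately weak enough to stay subthreshold) simulates a discrete I&F network following Eqs. (1)-(2) under a common sinusoidal input and extracts the dominant bin of the power spectrum of the average potential, as in the log-log plots of figure 1:

```python
import numpy as np

def mean_potential(I, W, gamma=0.99, theta=1.0):
    """Average potential of a discrete I&F network (Eq. 1) driven by a
    common input current I[t]; synaptic pulses are delivered one step
    after each spike, and potentials reset to 0 on firing."""
    N = W.shape[0]
    V = np.zeros(N)
    fired = np.zeros(N)
    avg = np.zeros(len(I))
    for t, i_t in enumerate(I):
        V = gamma * V + W @ fired + i_t   # leak + coupling + input
        fired = (V >= theta).astype(float)
        V[fired == 1.0] = 0.0             # reset just after the spike
        avg[t] = V.mean()
    return avg

def dominant_bin(signal):
    """Index of the strongest non-DC component of the power spectrum."""
    psd = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    return int(np.argmax(psd[1:]) + 1)

# Uncoupled network (W = 0): the response is a passive filter that
# carries the input frequency, i.e. the small-sigma end of figure 1.
t = np.arange(19080)
drive = 0.005 * np.sin(t / 1200.0)        # weak sinusoidal input
avg = mean_potential(drive, W=np.zeros((100, 100)))
```

Drawing W from a centered normal law with increasing σ instead of using W = 0 would move the run toward the internal-drift regime discussed above.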


Since we obviously cannot simply increase the weight connectivity by hand, the learning algorithm therefore has the task of making this coupling relevant for the situations experienced by the robot.

3 The Learning Rule

We describe in this section the learning algorithm we used, as well as the methodology for applying it. Recent neurobiology experiments have suggested that the relative timing of pre- and post-synaptic potentials plays an important role in determining the intensity as well as the sign of the change of a synapse's strength [16,17]. The intensity of this Long Term Potentiation (and Depression) depends directly on the relative timing, i.e. the spike delay between the post-synaptic and pre-synaptic neurons. If this delay is large enough (of the order of tens of milliseconds), no modification occurs. On the other hand, the modification is maximal when the post-synaptic neuron fires just after (or just before) the pre-synaptic one does. As [16] put it, one can extract quite straightforwardly a very simple rule that rests upon inter-spike delays. This "rule" is known as Spike-Time Dependent Plasticity (STDP). It has become a widespread implementation of Hebb's initial intuition on memory formation in the brain [18]. Since many STDP rules have emerged from experiments (see [17]), we decided to use a simple one, namely an additive rule expressed as:

ΔW_ij = α (W_max − |W_ij|) h(Δ_ij)

where Δ_ij is the difference between the last firing dates of the post-synaptic neuron i and the pre-synaptic neuron j, h(t) is a function which depends on the axonal delay between j and i, and α is a learning parameter. The "delay" function is chosen as:

h(t) = (T − t)/T    if 0 < t ≤ T
h(t) = 0            if t > T
h(t) = −h(−t)       if t < 0

Here T is the time-out constant (i.e. the relative timing above which no modification occurs). Figure 2 shows the learning function α h(Δ_ij) (we chose T = 50 time steps). In addition to classical Hebbian learning properties, STDP relies on a precise temporal frame. Neurons trained with STDP act as coincidence detectors, and the synapses of neurons that fire in a precise temporal order will be modified so as to reinforce that order. Moreover, STDP introduces competition into Hebbian plasticity.
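A minimal sketch of this additive rule (our own illustrative vectorization; the pre/post spike-time bookkeeping and the bound W_max are assumptions about details not fixed by the text):

```python
import numpy as np

def stdp_update(W, last_post, last_pre, alpha=0.001, w_max=1.0, T=50):
    """One additive STDP step: dW_ij = alpha * (w_max - |W_ij|) * h(Delta_ij),
    with Delta_ij = last firing date of post-synaptic neuron i minus that of
    pre-synaptic neuron j, and the piecewise-linear window
    h(t) = (T - t)/T for 0 < t <= T, 0 beyond T, and h(t) = -h(-t) for t < 0.
    A negative alpha gives the anti-STDP rule."""
    delta = last_post[:, None] - last_pre[None, :]       # Delta_ij matrix
    inside = (np.abs(delta) <= T) & (delta != 0)         # h(0) = 0 by antisymmetry
    h = np.where(inside, np.sign(delta) * (T - np.abs(delta)) / T, 0.0)
    return W + alpha * (w_max - np.abs(W)) * h
```

Note that a simultaneous firing (Δ = 0) leaves the weight untouched, consistent with the antisymmetry of h.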
As such, STDP seems a good candidate for temporal learning and dynamical coupling [19]. However, a real application of this rule per se does not seem straightforward. In order to see this, let us evaluate the effect of the rule on two neurons and introduce their cross-correlation function:

C_ij(τ) = (1/T) Σ_{k=0}^{T} S_i(k) S_j(k + τ)    (3)


Fig. 2. Schematic description of the learning rule

where T is a time window and S_i(t) is 1 whenever neuron i fires at time t and zero otherwise. We can then compute the evolution of the weight between the two neurons according to the learning law. It yields:

ΔW_ij = T Σ_{k≥1} h(k) (C_ij(k) − C_ij(−k))    (4)

Due to the shape of h, the sum is, in fact, finite, and there is no modification for strictly correlated neurons (C_ij(τ) = 0 for all τ ≠ 0). Moreover, for uncorrelated neurons, E(C_ij(τ)) = E(C_ij(−τ)) (i.e. the forward correlation equals the backward correlation), so the average weight modification is zero. Finally, periodic neurons with a phase φ of half the common period (C_ij(τ) = 0 for all τ ≠ ±φ and C_ij(φ) = C_ij(−φ)) will have their weights unmodified. These are the two fixed points of the learning rule. The speed of convergence depends on |α| and on the slope of h (that is, on T). The sign of α determines the stability of the fixed points. For α > 0, strict correlation is stable and dephasing by half the period is unstable. It is strictly the opposite for α < 0, the so-called "anti-STDP", also observed in real (fish) brains [17]. In any case, however, if an unmodified learning process is maintained for too long, synapses will eventually saturate, leading to a bimodal weight distribution [20]. This situation is illustrated in figure 3 (top), where the evolution of the average potential for various values of α is displayed. We are therefore able to figure out the effect of the learning rule on the coupling of the network and, consequently, on the overall behavior of the robot. Indeed, applying the learning rule straightforwardly with a constant learning parameter α will increase the coupling of the network. In that case, the resulting behavior of the robot will ultimately be a purely "autistic" one, ignoring any change in the input pattern. This is not what we want. We need a mechanism that regulates the weight modification. The idea consists of using a combination of both STDP and anti-STDP. Since phase synchronization differs for the two laws, we expect to maintain the network in the range of intermediate coupling. As displayed in figure 3 (bottom left), we perform a flip experiment where the


Fig. 3. Top: evolution of the average potential for α ∈ {−0.1, −0.01, −0.001} (anti-STDP) on the left and α ∈ {0.1, 0.01, 0.001} (STDP) on the right. Bottom: evolution of the average potential using, on the left, first STDP then anti-STDP (flip experiment) and, on the right, switching between the two laws every 1000 time steps (switch experiment). |α| = 0.01 in both experiments.

network is submitted first to STDP (for 100 000 time steps) and then to anti-STDP (for the remaining 100 000 time steps). This flipping of laws allows the network to remain in a safe zone. However, as figure 3 (bottom right) shows, the contribution of the two laws is not symmetrical. In the switch experiment (in effect, a higher-frequency flipping), the network increases its coupling. In other words, both laws couple the network, but STDP does it faster. Such considerations compel us to make careful decisions about when to apply each law.
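The two fixed points of Eq. (4) stated above (strictly correlated trains, and periodic trains dephased by half the period) can be checked numerically. The sketch below is our own illustration; it uses circular cross-correlations so that finite-window edge effects do not blur the exact cancellations:

```python
import numpy as np

def expected_dw(si, sj, T_out=50):
    """Expected weight change of Eq. (4):
    dW = T * sum_{k>=1} h(k) * (C(k) - C(-k)),
    with C_ij computed circularly over the window (T = train length)."""
    n = len(si)

    def C(tau):  # circular cross-correlation C_ij(tau)
        return float((si * np.roll(sj, -(tau % n))).sum()) / n

    def h(k):    # linear STDP window with time-out T_out
        return (T_out - k) / T_out

    return n * sum(h(k) * (C(k) - C(-k)) for k in range(1, T_out + 1))

# Periodic spike trains with period 20 over a 200-step window.
si = np.zeros(200)
si[::20] = 1.0
sj_same = si.copy()          # strictly correlated trains
sj_half = np.roll(si, 10)    # dephased by half the common period
```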

4 The Experiment

We tested our approach on a task of obstacle avoidance from visual flow. The robot has to avoid walls using only its camera information. More precisely, there are neither proximity sensors nor a positioning device available to it. In addition, the environment does not allow the robot to extract any simple rule to compute its exact position. Obviously, in order to accomplish this difficult task, the network must exhibit important internal loops, since no static input provides enough information by itself. The simulated agent is round, with a wheel on each side. It has two motors that control each wheel separately, allowing differential propulsion. It is also equipped with a linear camera of 64 pixels spanning 180 degrees (see figure 4, bottom right). It is positioned in an arena with black and white vertical stripes of random size painted on the walls at irregular intervals (see figure 4, top). This is similar to the environment described in [4]. As a simplification, we also tested the learning algorithm in an environment where the stripe sizes were equal.


Fig. 4. Top: the environment. Bottom left: the neural architecture. Bottom right: top view of the robot.

The controller of the robot is a spiking neural network with three layers of neurons. The first and third serve as sensor and motor neurons respectively. More precisely, each of the 64 pixels of the linear camera is associated with a neuron. These neurons are fed with an input current computed to give either 20 Hz or 200 Hz for a white or black pixel respectively. The current value I is calculated to provide the neuron with a desired period P. This is done using the formula: I = θ (1 − γ)/(1 − γ^P). We recall that γ is the leak (decay rate) and θ the threshold. Both were constant throughout the experiments and identical for all neurons (parameters: γ = 0.99 and θ = 1.0). The 2 output neurons serve as motoneurons, one for each motor (left and right). The motor speeds are computed as a linear function of the corresponding neuron's firing rate (over 20 time steps), in such a way that, if an output neuron does not fire in this time window, the motor goes rearward. The intermediate layer, the hidden layer, consists of 100 all-to-all connected neurons. Each input neuron has a connection to each hidden neuron, and all hidden neurons project to the output neurons. There is no direct connection from the input to the output layer (see figure 4, bottom left). All the weights of the three layers are chosen randomly according to a normal distribution. We chose the distribution parameters so as to obtain, when given an average input (same number of black and white pixels), a balanced proportion of spiking activity coming from the input and from the internal (hidden) activity, and an average behavior of near immobility. These parameters are the equivalent, for the robot, of the intermediate value of coupling described in section 2. (Parameters: μ_input = 0.0, σ_input = 0.05, μ_hidden = 0.0, σ_hidden = 0.09, μ_output = 0.04 and σ_output = 0.04, where μ and σ are the mean and standard deviation of the normal laws.) The robot has two contradictory goals: moving forward while detecting and then avoiding the walls.
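The formula for I follows from iterating Eq. (1) with a constant current from a reset: V(P) = I (1 − γ^P)/(1 − γ) = θ at the firing step. A small sketch, assuming (for illustration only) one time step per millisecond so that 20 Hz and 200 Hz correspond to P = 50 and P = 5:

```python
def current_for_period(P, gamma=0.99, theta=1.0):
    """Constant input current making a discrete LIF neuron (leak rate gamma,
    threshold theta, reset to 0) fire every P time steps:
    I = theta * (1 - gamma) / (1 - gamma**P)."""
    return theta * (1 - gamma) / (1 - gamma ** P)

# Hypothetical pixel-to-current mapping under the 1 ms/step assumption:
I_white = current_for_period(50)   # ~20 Hz for a white pixel
I_black = current_for_period(5)    # ~200 Hz for a black pixel
```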
In order to detect the walls, the robot has to move. It


allows us to extract two "physiologically" relevant states for the agent: moving and colliding. We decided to apply the learning rules only in those two situations. In addition, as explained in the previous section, these laws must be antagonistic, keeping in mind that one is stronger than the other. Consequently, we decided that when the robot moves in a straight line, whether forward or rearward (i.e. when both motor speeds are equal), we apply anti-STDP. When the robot hits a wall, we apply STDP. Indeed, since hitting a wall is the rarer event for the robot, we apply the stronger law in that case. Note that both laws reinforce the internal drift of the neural controller, but with different phases. They are both learning rules. The absolute value of the scaling factor was |α| = 0.001.
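The gating of the two rules on the robot's state can be sketched as a simple selection function (our own formulation; the motor speeds and the collision flag are assumed to be provided by the simulator):

```python
def plasticity_rate(left_speed, right_speed, collided, alpha=0.001):
    """Select the learning parameter for the current control step:
    STDP (positive alpha) on the rare collision events, anti-STDP
    (negative alpha) when driving in a straight line (equal motor
    speeds, forward or rearward), and no plasticity otherwise."""
    if collided:
        return alpha
    if left_speed == right_speed:
        return -alpha
    return 0.0
```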

5 Results

We drew ten random robots and compared their performance with and without learning over 100 000 time steps. The average number of collisions is shown in figure 5. In the learning experiment, the number of collisions decreased, indicating an obstacle avoidance behavior. Moreover, figure 6 (left) shows that moving forward is not impeded, and even increases for regular-size stripes. This means that the robots do not stand still and do not turn in circles. The obstacle avoidance behavior is thus non-trivial. Indeed, figure 7 shows the trajectory of a typical individual after the learning process (with regular stripes). One notes that the obstacle avoidance is not one hundred per cent perfect, since the agent actually collides with the wall (on the upper right). Nevertheless, the remainder of the run is collision-free. We can notice that the agent actually uses a temporal integration of the visual input to stay away from the walls. The avoidance behavior consists of small rearward and forward movements when close to an obstacle. Away from the walls, however, the movement is faster and smoother.

Fig. 5. Evolution of the number of collisions averaged over the ten robots. Experiment without learning is displayed with a dashed line while experiment with learning is displayed with a plain line. Left: with regular stripes. Right: with random stripes.


Fig. 6. Left: the average number of times the two motor speeds are equal and > 0, for robots without learning (dashed line) and with learning (plain line). Right: average number of collisions for various noise values β ∈ {1.0, 0.8, 0.5, 0.0}.

Fig. 7. The trajectory of a typical individual after the learning process. The black rectangle is the arena, the dashed rectangle indicates the radius of the agent.

In order to test the robustness of the learning, we introduced noise into the camera input. When computing the input current, let I(t) = β I_signal + (1 − β) Z_t, where β is a noise factor and Z_t a white noise. β = 1 corresponds to the noise-free simulation, while for β = 0 the input is made only of noise. We tested the ten robots after the learning process for 50 000 time steps, for β ∈ {1.0, 0.8, 0.5, 0.0}. Figure 6 (right) shows the average performance in terms of collisions. As already mentioned, even in the case β = 1 there are still some collisions, but for β = 0.8 (that is, 20% of the information is noise) the performance remains comparable. These behaviors were homogeneous and depended only on the statistics of the neural controller's connectivity. Moreover, they were not dominated by either extreme regime: the robots were neither reactive nor automatic. As a way to assess some properties of the resulting network, we recorded the spiking activity X_t (the number of hidden neurons that fire at time t) during the evolution of the robot whose trajectory is shown in figure 7. We kept a trace of this activity by setting x̄(t) = (1 − ω) X_t + ω x̄(t − 1) (with ω = 0.99),


Fig. 8. Distance/Spiking activity heightmap. Distance is expressed as a factor of the agent radius and the spiking activity was scaled between maximum and minimum.

indicating that the higher the value of the trace, the higher the past spiking activity. We also recorded the distance to the nearest obstacle and drew the bivariate heightmap of these two variables. This map is shown in figure 8. It shows that the distance is correlated with the spiking activity, implying that the resulting neural network exhibits distance selectivity in its overall activity. This is an emergent result of the learning algorithm, since we do not provide the robot with the distance, and there is no such selectivity in untrained networks.
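Both the noise injection and the activity trace above are simple recursions; a minimal sketch (function names are ours), assuming the spike counts and camera currents come from the simulation:

```python
import numpy as np

def activity_trace(X, omega=0.99):
    """Leaky trace of the hidden-layer spike count:
    xbar(t) = (1 - omega) * X(t) + omega * xbar(t - 1), with xbar(-1) = 0."""
    trace = np.empty(len(X))
    acc = 0.0
    for t, x in enumerate(X):
        acc = (1.0 - omega) * x + omega * acc
        trace[t] = acc
    return trace

def noisy_current(I_signal, beta, rng):
    """Camera current corrupted by white noise: beta = 1 is noise-free,
    beta = 0 is pure noise."""
    noise = rng.standard_normal(np.shape(I_signal))
    return beta * np.asarray(I_signal) + (1.0 - beta) * noise
```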

6 Discussion

In this article, we proposed that competing learning rules for competing dynamics can be a powerful way to develop a neural architecture that learns a temporal task. We are aware that, at first glance, this learning paradigm may seem ad hoc, or rather over-tuned. Indeed, empirical use of this Hebbian rule may not be enough to extract more than simple behaviors. Orienting the learning toward an observed behavior corresponding to our wishes is probably a much more complicated task. Nevertheless, to the best of our knowledge, this is the first scientific work where a spiking neural network learns a navigation task while the robot interacts with its environment. The results are homogeneous and depend only on the statistics of the network. Moreover, the collision-free navigation itself indicates that the learning process extracts non-trivial, relevant features of the robot/environment relationship.

Acknowledgements. This work was supported by the ACI DYNN. We would like to thank Carson Chow for his critical comments on the paper.


References

1. I. Harvey, E. Di Paolo, R. Wood, and M. Quinn. Evolutionary robotics: A new scientific tool for studying cognition. Artificial Life, 11(1-2):79–98, 2005.
2. H. Soula, G. Beslon, and J. Favrel. Evolving spiking neurons nets to control an animat. In D. Pearson, N. Steel, and R. Albrecht, editors, Proc. of ICANN-GA, pages 193–197, Roanne, France, 2003.
3. E. di Paolo. Evolving spike-timing-dependent plasticity for single-trial learning in robots. Phil. Trans. R. Soc. Lond. A, 361:2299–2319, 2003.
4. D. Floreano and C. Mattiussi. Evolution of spiking neural controllers for autonomous vision-based robots. In T. Gomi, editor, Evolutionary Robotics, Berlin, Germany, 2001. Springer-Verlag.
5. H. C. Tuckwell. Introduction to Theoretical Neurobiology, Vol. 2: Nonlinear and Stochastic Theories. Cambridge University Press, Cambridge, 1988.
6. C. van Vreeswijk and H. Sompolinsky. Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science, 274:1724–1726, 1996.
7. D. Q. Nykamp and D. Tranchina. A population density approach that facilitates large-scale modeling of neural networks: Analysis and an application to orientation tuning. Journal of Computational Neuroscience, 8:19–50, 2000.
8. C. Meyer and C. van Vreeswijk. Temporal correlations in stochastic networks of spiking neurons. Neural Computation, 14(2):369–404, 2002.
9. C. C. Chow. Phase-locking in weakly heterogeneous neural networks. Physica D, 118:343–370, 1998.
10. W. Gerstner. Population dynamics of spiking neurons: fast transients, asynchronous states and locking. Neural Computation, 12:43–89, 2000.
11. N. Brunel. Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of Computational Neuroscience, 8:183–208, 2000.
12. D. J. Amit and N. Brunel. Model of global spontaneous activity and local structured delay activity during learning periods in the cerebral cortex. Cerebral Cortex, 7:237–252, 1997.
13. S. Fusi and M. Mattia. Collective behavior of networks with linear (VLSI) integrate-and-fire neurons. Neural Computation, 11:633–652, 1999.
14. M. Mattia and P. del Giudice. Population dynamics of interacting spiking neurons. Physical Review E, 66(5), 2002.
15. H. Soula, G. Beslon, and O. Mazet. Spontaneous dynamics of asymmetric random recurrent spiking neural networks. Neural Computation, 18(1), 2006.
16. G. Bi and M. Poo. Synaptic modifications in cultured hippocampal neurons: Dependence on spike timing, synaptic strength, and postsynaptic cell type. The Journal of Neuroscience, 18(24):10464–10472, December 1998.
17. L. F. Abbott and S. B. Nelson. Synaptic plasticity: taming the beast. Nature Neuroscience, 3:1178–1183, 2000.
18. D. O. Hebb. The Organization of Behavior. Wiley, New York, 1949.
19. R. P. N. Rao and T. J. Sejnowski. Spike-timing-dependent Hebbian plasticity as temporal difference learning. Neural Computation, 13:2221–2237, 2001.
20. S. Song, K. D. Miller, and L. F. Abbott. Competitive Hebbian learning through spike-timing dependent plasticity. Nature Neuroscience, 3:919–926, 2000.
