|Published (Last):||22 January 2005|
|PDF File Size:||14.10 Mb|
|ePub File Size:||17.31 Mb|
|Price:||Free* [*Free Registration Required]|
Collectively, they shape both the strengths and spatial arrangements of convergent afferent inputs to neuronal dendrites. Recent experimental and theoretical studies support a clustered plasticity model, a view that synaptic plasticity promotes the formation of clusters or hotspots of synapses sharing similar properties.
We have previously shown that spike timing-dependent plasticity (STDP) can lead to synaptic efficacies being arranged into spatially segregated clusters. This effectively partitions the dendritic tree into a tessellated imprint which we have called a dendritic mosaic.
We show that cluster formation and extent depend on several factors, including the balance between potentiation and depression, the afferents' mean firing rate and, crucially, the dendritic morphology. We find that STDP balance plays an important role in this emergent mode of spatial organization, since any imbalance leads to severe degradation, and in some cases even destruction, of the mosaic.
Our model suggests that, over a broad range of STDP parameters, synaptic plasticity shapes the spatial arrangement of synapses, favoring the formation of clustered efficacy engrams.
Spike-timing-dependent plasticity (STDP) is a set of Hebbian learning rules firmly based on biological evidence. It has been demonstrated that one of the STDP learning rules is suited for learning spatiotemporal patterns. When multiple neurons are organized in a simple competitive spiking neural network, this network is capable of learning multiple distinct patterns. If patterns overlap significantly, however, learning becomes unreliable. This letter presents a simple neural network that combines vertical inhibition and a Euclidean distance-dependent synaptic strength factor.
This approach helps to solve the problem of pattern size-dependent parameter optimality and significantly reduces the probability of a neuron's forgetting an already learned pattern. For demonstration purposes, the network was trained for the first ten letters of the Braille alphabet.
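The Euclidean distance-dependent strength factor can be sketched as a Gaussian falloff with distance; the letter does not specify the exact functional form, so the Gaussian shape and the `sigma` value below are illustrative assumptions:

```python
import math

def distance_factor(pre_pos, post_pos, sigma=1.0):
    # Gaussian falloff with Euclidean distance between neuron positions;
    # the functional form and sigma are illustrative assumptions.
    d2 = sum((a - b) ** 2 for a, b in zip(pre_pos, post_pos))
    return math.exp(-d2 / (2.0 * sigma ** 2))
```

Multiplying each plasticity update by such a factor makes nearby neurons interact more strongly than distant ones, which is one way to obtain pattern-size-independent behavior.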
Synchrony detection and amplification by silicon neurons with STDP synapses. Spike-timing dependent synaptic plasticity (STDP) is a form of plasticity driven by precise spike-timing differences between presynaptic and postsynaptic spikes. Thus, the learning rules underlying STDP are suitable for learning neuronal temporal phenomena such as spike-timing synchrony. It is well known that weight-independent STDP creates unstable learning processes resulting in balanced bimodal weight distributions.
In this paper, we present a neuromorphic analog very large scale integration (VLSI) circuit that contains a feedforward network of silicon neurons with STDP synapses. The learning rule implemented can be tuned to have a moderate level of weight dependence.
This helps stabilise the learning process while still generating bimodal weight distributions. From on-chip learning experiments we show that the chip can detect and amplify hierarchical spike-timing synchrony structures embedded in noisy spike trains. The weight distributions of the network emerging from learning are bimodal.
Energy-efficient neuron, synapse and STDP integrated circuits. Ultra-low energy biologically-inspired neuron and synapse integrated circuits are presented. The synapse includes a spike timing dependent plasticity (STDP) learning rule circuit.
These circuits have been designed, fabricated and tested using a 90 nm CMOS process. Experimental measurements demonstrate proper operation. The neuron and the synapse with STDP circuits have an energy consumption of around 0.

A forecast-based STDP rule suitable for neuromorphic implementation. Artificial neural networks increasingly involve spiking dynamics to permit greater computational efficiency. This becomes especially attractive for on-chip implementation using dedicated neuromorphic hardware.
However, both spiking neural networks and neuromorphic hardware have historically found it difficult to implement efficient, effective learning rules. The best-known spiking neural network learning paradigm is Spike Timing Dependent Plasticity (STDP), which adjusts the strength of a connection in response to the time difference between the pre- and post-synaptic spikes.
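A minimal sketch of such a pair-based update, with illustrative (not paper-specific) amplitudes and time constants:

```python
import math

def stdp_delta_w(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP window; dt = t_post - t_pre in ms.

    Pre-before-post (dt > 0) potentiates, post-before-pre (dt < 0)
    depresses; all parameter values are illustrative assumptions.
    """
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)
    if dt < 0:
        return -a_minus * math.exp(dt / tau_minus)
    return 0.0
```

The exponential window means near-coincident spike pairs produce the largest weight changes, with influence decaying on the scale of the time constants.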
Approaches that relate learning features to the membrane potential of the post-synaptic neuron have emerged as possible alternatives to the more common STDP rule, with various implementations and approximations. Here we use a new type of neuromorphic hardware, SpiNNaker, which represents the flexible "neuromimetic" architecture, to demonstrate a new approach to this problem. We show that on the basis of the membrane potential it is possible to make a statistical prediction of the time needed by the neuron to reach the threshold, and therefore the LTP part of the STDP algorithm can be triggered when the neuron receives a spike.
In our system these approximations allow efficient memory access, reducing the overall computational time and the memory bandwidth required. The improvements here presented are significant for real-time applications such as the ones for which the SpiNNaker system has been designed.
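One way to see how a membrane-potential-based forecast can work: for a leaky integrate-and-fire neuron under a constant effective drive, the time to threshold has a closed form. This deterministic sketch stands in for the statistical prediction described above; the neuron model and all constants are illustrative assumptions, not the SpiNNaker implementation:

```python
import math

def time_to_threshold(v, v_rest=-65.0, v_th=-50.0, tau_m=20.0, drive=20.0):
    # Forecast (ms) of when a leaky integrate-and-fire neuron at potential
    # v (mV) reaches threshold under a constant effective drive (mV).
    # All constants are illustrative assumptions.
    v_inf = v_rest + drive            # steady-state potential under drive
    if v_inf <= v_th:
        return math.inf               # drive too weak: threshold never reached
    return tau_m * math.log((v_inf - v) / (v_inf - v_th))
```

In a forecast-based rule, a short predicted time-to-threshold when a presynaptic spike arrives can trigger the LTP branch without waiting for the postsynaptic spike itself.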
We present simulation results that show the efficacy of this algorithm using one or more input patterns repeated over the whole time of the simulation. On-chip results show that.

Excitatory and inhibitory STDP jointly tune feedforward neural circuits to selectively propagate correlated spiking activity.
Spike-timing-dependent plasticity (STDP) has been well established between excitatory neurons, and several computational functions have been proposed in various neural systems. Here, we demonstrate by analytical and numerical methods that inhibitory STDP (iSTDP) contributes crucially to the balance of excitatory and inhibitory weights for the selection of a specific signaling pathway among other pathways in a feedforward circuit.
This pathway selection is based on the high sensitivity of iSTDP to correlations in spike times, which complements a recent proposal for the role of iSTDP in firing-rate based selection. Our model predicts that asymmetric anti-Hebbian iSTDP exceeds asymmetric Hebbian iSTDP in supporting pathway-specific balance, which we show is useful for propagating transient neuronal responses.
Furthermore, we demonstrate how STDP rules at excitatory-excitatory, excitatory-inhibitory, and inhibitory-excitatory synapses cooperate to improve the pathway selection. We propose that iSTDP is crucial for shaping the network structure that achieves efficient processing of synchronous spikes.

We present a novel strategy for unsupervised feature learning in image applications inspired by the Spike-Timing-Dependent-Plasticity (STDP) biological learning rule.
We show an equivalence between rank-order-coding Leaky-Integrate-and-Fire neurons and ReLU artificial neurons when applied to non-temporal data. We apply this to images using rank-order coding, which allows us to perform a full network simulation with a single feed-forward pass using GPU hardware.
Next we introduce a binary STDP learning rule compatible with training on batches of images. Two mechanisms to stabilize the training are also presented: a Winner-Takes-All (WTA) framework which selects the most relevant patches to learn from along the spatial dimensions, and a simple feature-wise normalization as a homeostatic process.
This learning process allows us to train multi-layer architectures of convolutional sparse features. We finally compare these results with several other state-of-the-art unsupervised learning methods.
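The two stabilizing mechanisms can be sketched in plain Python, under assumed data layouts (feature maps as nested lists, weights as per-feature vectors); the paper's exact implementation may differ:

```python
import math

def wta_select(activations):
    # Spatial Winner-Takes-All: for each feature map (a 2-D H x W list),
    # return the (row, col) of the strongest activation -- the only
    # location allowed to trigger learning for that feature.
    winners = []
    for fmap in activations:
        r, c = max(((i, j) for i in range(len(fmap))
                    for j in range(len(fmap[0]))),
                   key=lambda ij: fmap[ij[0]][ij[1]])
        winners.append((r, c))
    return winners

def feature_norm(weights, eps=1e-8):
    # Homeostatic feature-wise normalization: rescale each feature's
    # weight vector to unit L2 norm so no feature dominates learning.
    out = []
    for w in weights:
        n = math.sqrt(sum(x * x for x in w)) + eps
        out.append([x / n for x in w])
    return out
```

WTA restricts each update to the most active spatial location, while the per-feature normalization keeps overall drive comparable across features.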
Stability versus neuronal specialization for STDP: long-tail weight distributions solve the dilemma. Spike-timing-dependent plasticity (STDP) modifies the weight, or strength, of synaptic connections between neurons and is considered to be crucial for generating network structure. It has been observed in physiology that, in addition to spike timing, the weight update also depends on the current value of the weight. The functional implications of this feature are still largely unclear.
Additive STDP gives rise to strong competition among synapses, but due to the absence of weight dependence, it requires hard boundaries to secure the stability of weight dynamics. Multiplicative STDP with linear weight dependence for depression ensures stability, but it lacks sufficiently strong competition required to obtain a clear synaptic specialization. A solution to this stability-versus-function dilemma can be found with an intermediate parametrization between additive and multiplicative STDP.
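Such an intermediate parametrization is commonly written with an exponent `mu` that interpolates between additive (`mu = 0`) and multiplicative (`mu = 1`) weight dependence; the sketch below uses illustrative constants and is not the paper's specific rule:

```python
def stdp_update(w, potentiate, eta=0.01, mu=0.5, w_max=1.0):
    # mu = 0: additive (no weight dependence, needs hard bounds);
    # mu = 1: multiplicative (linear weight dependence, stable but
    # weakly competitive); 0 < mu < 1 interpolates between the two.
    if potentiate:
        return w + eta * (w_max - w) ** mu   # potentiation shrinks near w_max
    return w - eta * w ** mu                 # depression shrinks near 0
```

For `mu > 0` the update vanishes smoothly at the bounds, stabilizing the dynamics, while small `mu` preserves enough competition for synaptic specialization.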
Here we propose a novel solution to the dilemma, named log-STDP, whose key feature is a sublinear weight dependence for depression. Due to its specific weight dependence, this new model can produce broad weight distributions with no hard upper bound, similar to those recently observed in experiments. Log-STDP induces graded competition between synapses, such that synapses receiving stronger input correlations are pushed further into the tail of very large weights.
Strong weights are functionally important to enhance the neuronal response to synchronous spike volleys. Depending on the input configuration, multiple groups of correlated synaptic inputs exhibit either winner-share-all or winner-take-all behavior. When the configuration of input correlations changes, individual synapses quickly and robustly readapt to represent the new configuration. We also demonstrate the advantages of log-STDP for generating a stable structure of strong weights in a recurrently connected network.
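A sublinear weight dependence for depression can be sketched as roughly linear below a reference weight `w0` and logarithmic above it, so large weights are only weakly pulled back and a long tail can form. The exact functional form and constants below are assumptions for illustration, not necessarily those of the log-STDP model:

```python
import math

def depression_factor(w, w0=1.0, beta=5.0):
    # Sublinear (log-like) weight dependence for depression:
    # ~linear for w <= w0, logarithmic saturation beyond w0.
    # w0 and beta are illustrative assumptions.
    if w <= w0:
        return w / w0
    return 1.0 + math.log(1.0 + beta * (w / w0 - 1.0)) / beta
```

Because depression grows slower than linearly for large `w`, strong synapses are not dragged back toward the mean, which permits heavy-tailed weight distributions without a hard upper bound.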
These properties of log-STDP are compared with those of previous models. Through long-tail weight.

STDP allows fast rate-modulated coding with Poisson-like spike trains.
Spike timing-dependent plasticity (STDP) has been shown to enable single neurons to detect repeatedly presented spatiotemporal spike patterns. This holds even when such patterns are embedded in equally dense random spiking activity, that is, in the absence of external reference times such as a stimulus onset.
Here we demonstrate, both analytically and numerically, that STDP can also learn repeating rate-modulated patterns, which have received more experimental evidence, for example, through post-stimulus time histograms (PSTHs). Each input spike train is generated from a rate function using a stochastic sampling mechanism, chosen to be an inhomogeneous Poisson process here.
Repeated pattern presentations induce spike-time correlations that are captured by STDP. Despite imprecise input spike times and even variable spike counts, a single trained neuron robustly detects the pattern just a few milliseconds after its presentation. Therefore, temporal imprecision and Poisson-like firing variability are not an obstacle to fast temporal coding. STDP provides an appealing mechanism to learn such rate patterns, which, beyond sensory processing, may also be involved in many cognitive tasks.
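Such inhomogeneous Poisson input can be generated by thinning (the Lewis-Shedler method): draw candidate spikes from a homogeneous process at an upper-bound rate, then accept each with probability rate(t)/rate_max. This is a standard sampling sketch, not code from the study:

```python
import random

def inhom_poisson(rate_fn, t_max, rate_max, seed=0):
    # rate_fn(t): instantaneous rate in Hz (must never exceed rate_max);
    # t_max in seconds. Returns sorted spike times in [0, t_max).
    rng = random.Random(seed)
    t, spikes = 0.0, []
    while True:
        t += rng.expovariate(rate_max)        # next homogeneous candidate
        if t >= t_max:
            return spikes
        if rng.random() < rate_fn(t) / rate_max:
            spikes.append(t)                  # thin: accept w.p. r(t)/r_max
```

Repeating the same `rate_fn` across trials while resampling the spikes reproduces the setting in the text: identical rate patterns, Poisson-variable spike times and counts.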
On the applicability of STDP-based learning mechanisms to spiking neuron network models. Ways of creating a practically effective learning method for spiking neuron networks, one appropriate for implementation in neuromorphic hardware and at the same time based on biologically plausible plasticity rules, namely STDP, are discussed.
The influence of the amount of correlation between input and output spike trains on learnability under different STDP rules is evaluated. The usability of alternative combined learning schemes involving artificial and spiking neuron models is demonstrated on the Iris benchmark task and on the practical task of gender recognition.
STDP-based spiking deep convolutional neural networks for object recognition. Previous studies have shown that spike-timing-dependent plasticity (STDP) can be used in spiking neural networks (SNNs) to extract visual features of low or intermediate complexity in an unsupervised manner.
These studies, however, used relatively shallow architectures, and only one layer was trainable. Another line of research has demonstrated, using rate-based neural networks trained with back-propagation, that having many layers increases the recognition robustness, an approach known as deep learning.
We used a temporal coding scheme where the most strongly activated neurons fire first, and less activated neurons fire later or not at all. The network was exposed to natural images. Thanks to STDP , neurons progressively learned features corresponding to prototypical patterns that were both salient and frequent. Only a few tens of examples per category were required and no label was needed.
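A minimal latency-coding sketch consistent with this scheme (a linear mapping from intensity to first-spike time is an assumption; the study's exact conversion may differ):

```python
def intensity_to_latency(intensities, t_max=1.0):
    # Stronger input -> earlier first spike; neurons with non-positive
    # input never fire (infinite latency). Linear mapping is an
    # illustrative assumption.
    peak = max(intensities)
    if peak <= 0:
        return [float("inf")] * len(intensities)
    return [t_max * (1.0 - x / peak) if x > 0 else float("inf")
            for x in intensities]
```

Under such a code, downstream neurons that integrate the earliest spikes effectively respond to the most salient inputs, which is what makes single-pass, spike-order processing possible.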
After learning, the complexity of the extracted features increased along the hierarchy, from edge detectors in the first layer to object prototypes in the last layer. Coding was very sparse, with only a few thousand spikes per image, and in some cases the object category could be reasonably well inferred from the activity of a single higher-order neuron.
More generally, the activity of a few hundred such neurons contained robust category information, as demonstrated using a classifier on the Caltech, ETH, and MNIST databases. Taken together, our results suggest that the combination of STDP with latency coding may be a key to understanding the way the primate visual system learns, its remarkable processing speed and its low energy consumption. These mechanisms are also interesting for artificial vision systems, particularly for hardware.
Emergence of small-world structure in networks of spiking neurons through STDP plasticity.