Minutes of GD on Memristors


krishmehta97

Jul 30, 2016, 1:50:38 PM
to Electronics Club IITB
What is a Memristor?
A circuit element whose resistance is determined by the current that has flowed through it so far, taking the direction into account as well. As the name suggests, it combines "memory" (the history of current/charge) with "resistance".
Why are we looking for a memristor?
Consider the four fundamental circuit variables: charge Q, current I, voltage V and flux. I and V cover the electrical quantities, but flux must be included to account for magnetic properties as well. Of the six pairwise relations between these four variables, five are already accounted for: I = dQ/dt and V = d(Flux)/dt by definition, and the resistor (V-I), capacitor (Q-V) and inductor (Flux-I). That leaves the Flux-Q relation, and the element realising it was predicted to be the memristor.
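The counting argument can be spelled out in a short sketch (the variable labels and relation strings are mine, chosen just for illustration): of the six pairings of the four variables, only flux-charge has no element attached to it.

```python
# Enumerate the six pairwise relations between the four circuit variables.
# Five are covered by definitions or known elements; the sixth (flux-charge)
# is the gap that the memristor was predicted to fill.
from itertools import combinations

variables = ["charge q", "flux phi", "voltage v", "current i"]

known = {
    frozenset(["charge q", "current i"]): "definition: i = dq/dt",
    frozenset(["flux phi", "voltage v"]): "Faraday: v = dphi/dt",
    frozenset(["voltage v", "current i"]): "resistor: v = R i",
    frozenset(["charge q", "voltage v"]): "capacitor: q = C v",
    frozenset(["flux phi", "current i"]): "inductor: phi = L i",
}

for pair in combinations(variables, 2):
    relation = known.get(frozenset(pair), "MISSING -> memristor: phi = f(q)")
    print(f"{pair[0]:>9} - {pair[1]:<9}: {relation}")
```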


Characteristics of a Memristor:
Since R, L and C each follow a linear relation between their corresponding pair of quantities, we first try Flux = M.q, but that behaves just like a resistance, with Flux-Q analogous to V-I. So consider Flux = M.q^2 instead. Using the standard relations, V = d(Flux)/dt = 2Mq.(dq/dt) = 2Mq.I, i.e. the element acts like a resistance R(q) = 2Mq that depends on the total charge passed through it so far: it "retains" a memory of its history.
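A minimal numeric sketch of the Flux = M.q^2 behaviour (M, the drive current and the time step are arbitrary illustrative values): under a constant current, the effective resistance 2Mq grows with the total charge passed, not with the instantaneous current.

```python
# With phi = M*q^2, v = dphi/dt = (dphi/dq)*(dq/dt) = 2*M*q * i,
# so the element behaves like a resistance R(q) = 2*M*q that depends
# on the total charge that has flowed through it.
M = 0.5      # memristance coefficient (arbitrary, for illustration)
i = 1e-3     # constant current drive, amperes
dt = 1e-3    # time step, seconds

q = 0.0
for step in range(1000):
    q += i * dt           # charge accumulates over time
resistance = 2 * M * q    # R(q) = dphi/dq = 2*M*q
print(resistance)         # grows with charge passed, not with instantaneous i
```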
First manufacture of Memristor:
In April 2008, HP Labs made a working memristor from nanoscale slabs of TiO2 and oxygen-deficient TiO(2-x). As voltage is applied, oxygen vacancies in the TiO(2-x) layer are pushed towards or away from the TiO2 slab depending on the polarity, so the length of the TiO(2-x) region changes and the length of the TiO2 region changes accordingly. Applying the conditions V = IR, R = ρL(variable)/A and dL/dt proportional to I, we solve to realise Flux = M.Q^2.
The uniqueness of its behavior is due to:
1. Movement of Oxygen vacancies into/away from TiO2 area depending upon the bias given.
2. Linear variation of resistance with length of Ti-oxide areas.
3. Oxygen vacancies staying where they are when the voltage is turned off.
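The mechanism above is often written as a linear dopant-drift model: the doped TiO(2-x) region of width w (out of total thickness D) acts as a low resistance Ron, the undoped TiO2 region as Roff, and w moves in proportion to the current. A minimal sketch, with illustrative parameter values rather than HP's actual numbers:

```python
# Sketch of a linear dopant-drift memristor model: total resistance is a
# length-weighted mix of the doped (Ron) and undoped (Roff) regions, and
# the doped-region width w drifts with the current. Values are illustrative.
R_on, R_off = 100.0, 16e3   # ohms (assumed)
D = 10e-9                   # total device thickness, m (assumed)
mu_v = 1e-14                # dopant mobility, m^2/(V*s) (assumed)
w = 1e-9                    # initial doped-region width, m

def resistance(w):
    # Linear variation of resistance with the lengths of the two regions.
    return R_on * (w / D) + R_off * (1 - w / D)

i, dt = 1e-3, 1e-5          # constant drive current and time step
for _ in range(5000):
    w += mu_v * (R_on / D) * i * dt   # dw/dt proportional to i(t)
    w = min(max(w, 0.0), D)           # w cannot leave the device
print(resistance(w))  # drops as vacancies spread towards the TiO2 region
```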
Let us see some of its exciting applications!
Learning in Neural Networks:
Learning is encoded in the "synaptic plasticity" of synapses. Plasticity here is the retention of the "weightage" given to the current from a neuron. LEARNING comes from the fact that weights are assigned such that spikes are transferred along specific paths, and spikes at nearby neurons are correlated with each other to establish relations. If two adjacent neurons receive two spikes a time t apart, this t is very important in deciding the weights: if the two spikes are close enough, they must be relevant to each other, which increases the weight for transferring spikes from one neuron to the other.
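The timing rule described above is known as spike-timing-dependent plasticity (STDP). Here is a minimal sketch of one common exponential form of it; the amplitudes and time constant are assumed for illustration, not taken from any particular hardware:

```python
# Simplified STDP curve: spikes close together in time produce a large
# weight change, spikes far apart produce almost none, and the sign of
# delta_t (which neuron fired first) decides strengthen vs weaken.
import math

A_plus, A_minus = 0.1, 0.12   # learning amplitudes (assumed)
tau = 20e-3                   # plasticity time constant, seconds (assumed)

def weight_change(delta_t):
    """delta_t = t_post - t_pre, in seconds."""
    if delta_t >= 0:          # pre fired before post: causal, strengthen
        return A_plus * math.exp(-delta_t / tau)
    return -A_minus * math.exp(delta_t / tau)   # acausal: weaken

print(weight_change(5e-3))    # small positive gap -> large increase
print(weight_change(100e-3))  # spikes far apart -> change near zero
```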
Neural Network Hardware:
This is what we adopt to mimic our brain's learning.
We model input and output spikes along wires and the synapses by memristors; the whole thing is essentially a mesh of crossbar latches (you can look it up if enthu). Spikes are given at the inputs and an output spike is received. The input and output spikes superimpose across each memristor, correlating the delta-t between spikes with a delta-R of the memristor. Weight adjustment means repeatedly feeding inputs and outputs so the weights settle such that, when you finally give that set of inputs, you get a spike at that output.
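A sketch of why the crossbar mesh computes the weighted sums for free: with input voltages on the row wires and a memristor conductance G at each crossing, Kirchhoff's current law makes each column current a dot product of the inputs with that column's conductances. The numbers below are illustrative:

```python
# Memristor crossbar read: each crossing contributes G[i][j] * V[i] to
# column j by Ohm's law, and the column wire sums those contributions,
# so I_out[j] = sum_i G[i][j] * V[i] with no extra circuitry.
G = [                     # conductances (siemens), one row per input wire
    [1e-3, 2e-3],
    [4e-3, 0.5e-3],
]
V = [0.2, 0.1]            # input spike voltages on the row wires

columns = len(G[0])
I_out = [sum(G[i][j] * V[i] for i in range(len(V))) for j in range(columns)]
print(I_out)              # column currents = weighted sums of the inputs
```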
Non-Volatile Memories:
Using the property of resistance retention, we can make memories that wake up in exactly the state they were in when switched off. To read a stored value, we apply a small AC current to the memristor unit, so that no net charge flows and its state is left unchanged, and measure its resistance. If it is above a threshold, we take it as 1; below that, it is 0.
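The read-out step can be sketched as a simple threshold compare; the threshold value is an assumption, placed between a low ON-state and a high OFF-state resistance:

```python
# Decode a memristor cell's stored bit from its measured resistance:
# above the threshold reads as 1, below it as 0 (the convention used above).
R_THRESHOLD = 8e3   # ohms, between the ON and OFF states (assumed)

def read_bit(resistance_ohms):
    return 1 if resistance_ohms > R_THRESHOLD else 0

print(read_bit(16e3))   # high (OFF-state) resistance reads as 1
print(read_bit(100.0))  # low (ON-state) resistance reads as 0
```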