PPt6 - Hopfield

Shared by: Pham Thanh Hai | Date: | File type: PPT | Pages: 53




Text content: PPt6 - Hopfield

  1. Recurrent network
  A recurrent network has hidden neurons: delay elements z-1 are used, and the output of each neuron is fed back to all of the neurons.
  (Diagram: input, hidden, and output layers connected through z-1 delay elements in the feedback paths.)
  Faculty of Electronics and Telecommunications, HUT, Bangkok, Jun. 14 – 23, 2006
  2. Recurrent Neural Network (RNN)
  Input: a pattern (often noisy or degraded).
  Output: the corresponding pattern (perfect, i.e. relatively noise-free).
  Process:
  - Load a pattern onto the core group of densely interconnected neurons.
  - Run the core neurons until they reach a stable state.
  - Read the output from the states of the core neurons.
  Example: input (1 0 1 -1 -1) produces output (1 -1 1 -1 -1).
  3. Associative-Memory Networks
  Input: a pattern (often noisy or degraded).
  Output: the corresponding pattern (perfect, i.e. relatively noise-free).
  Process:
  - Load a pattern onto the core group of densely interconnected neurons.
  - Run the core neurons until they reach a stable state.
  - Read the output from the states of the core neurons.
  Example: input (1 0 1 -1 -1) produces output (1 -1 1 -1 -1).
  4. Types of Associative Network
  1. Auto-associative: X = Y. Recognizes noisy versions of a pattern.
  2. Hetero-associative bidirectional: X <-> Y. BAM = Bidirectional Associative Memory. Iterative correction of input and output.
  5. Types of Associative Network (cont.)
  3. Hetero-associative input correcting: X -> Y. The input clique is auto-associative => repairs input patterns.
  4. Hetero-associative output correcting: X -> Y. The output clique is auto-associative => repairs output patterns.
  6. Hebb's Rule
  Connection weights ~ correlations. "When one cell repeatedly assists in firing another, the axon of the first cell develops synaptic knobs (or enlarges them if they already exist) in contact with the soma of the second cell." (Hebb, 1949)
  In an associative neural net, if we compare two pattern components (e.g. pixels) within many patterns and find that they are frequently in:
  a) the same state, then the arc weight between their NN nodes should be positive;
  b) different states, then the arc weight between their NN nodes should be negative.
  Matrix memory: the weights must store the average correlations between all pattern components across all patterns. A net presented with a partial pattern can then use the correlations to recreate the entire pattern.
  7. Correlated Field Components
  - Each component is a small portion of the pattern field (e.g. a pixel).
  - In the associative neural network, each node represents one field component.
  - For every pair of components, their values are compared in each of several patterns.
  - The weight on the arc between the NN nodes for the two components a and b is set to ~ their average correlation w_ab.
  8. Quantifying Hebb's Rule
  Compare two nodes to calculate a weight change that reflects the state correlation: when the two components are the same (different), increase (decrease) the weight.
  Auto-association:   Δw_jk ∝ i_pk i_pj    (i = input component)
  Hetero-association: Δw_jk ∝ i_pk o_pj    (o = output component)
  Ideally, the weights will record the average correlations across all patterns:
  Auto:   w_jk ∝ Σ_{p=1..P} i_pk i_pj      Hetero: w_jk ∝ Σ_{p=1..P} i_pk o_pj
  Hebbian principle: if all the input patterns are known prior to retrieval time, then initialize the weights as:
  Auto:   w_jk = (1/P) Σ_{p=1..P} i_pk i_pj    Hetero: w_jk = (1/P) Σ_{p=1..P} i_pk o_pj
  Weights = average correlations.
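The auto-associative initialization above can be sketched in a few lines of NumPy; the bipolar patterns below are illustrative, not taken from the slides:

```python
import numpy as np

# Hebbian initialization for an auto-associative net:
# w_jk = (1/P) * sum over p of i_pk * i_pj, i.e. the average
# correlation of components j and k across the P patterns.
patterns = np.array([
    [ 1, -1,  1, -1],   # illustrative bipolar patterns
    [ 1,  1, -1, -1],
])
P = patterns.shape[0]
W = patterns.T @ patterns / P   # W[j, k] = average of i_pj * i_pk
print(W)
```

Components that agree in every pattern get weight +1, those that always disagree get -1, and uncorrelated pairs get 0.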
  9. Hopfield Networks
  - Auto-association network.
  - Fully connected (clique) with symmetric weights.
  - State of a node = f(inputs).
  - Weight values based on the Hebbian principle.
  - Performance: must iterate a bit to converge on a pattern, but generally much less computation than in back-propagation networks.
  Discrete node update rule: x_pk(t+1) = sgn( Σ_{j=1..n} w_kj x_pj(t) )
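One application of this update rule can be sketched as follows, using a small illustrative 3-neuron net (weights w12 = +1, w13 = w23 = -1, no self-feedback):

```python
import numpy as np

# One synchronous application of the discrete update rule
# x_k(t+1) = sgn( sum_j w_kj * x_j(t) ).
W = np.array([[ 0,  1, -1],
              [ 1,  0, -1],
              [-1, -1,  0]])
x = np.array([1, 1, -1])
x_next = np.sign(W @ x)
print(x_next.tolist())   # [1, 1, -1]: this state maps to itself (stable)
```

A state that maps to itself under the rule is a stable state, which is what retrieval converges to.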
  10. Hopfield Networks
  - The Hopfield network implements a so-called associative (also called content-addressable) memory.
  - A collection of patterns called fundamental memories is stored in the NN by means of the weights.
  - Each neuron represents an attribute (dimension, feature) of the input.
  - The weight of the link between two neurons measures the correlation between the two corresponding attributes over the fundamental memories. If the weight is high, then the corresponding attributes are often equal over the fundamental memories.
  - John Hopfield (physicist, at Caltech in 1982 when he proposed the model, now at Princeton).
  11. Example: image retrieval
  (Figure: corrupted image, intermediate state of the Hopfield net, output.)
  - States: bit maps.
  - Attractors: prototype patterns.
  - Input: an arbitrary pattern (e.g. a picture with noise).
  - Output: the best prototype for that pattern.
  12. Hopfield NN architecture: recurrent
  Multiple-loop feedback system with no self-feedback.
  (Diagram: example of a Hopfield NN for 3-dimensional input data; neurons 1, 2, 3, one per attribute of the input (x1, x2, x3), with weights of +1 and -1 on the links.)
  Execution: the input pattern attributes are the initial states of the neurons; repeatedly update the states of the neurons asynchronously until the states do not change.
  13. Discrete Hopfield NN
  - Input vector values are in {-1, 1} (or {0, 1}).
  - The number of neurons is equal to the input dimension.
  - Every neuron has a link from every other neuron (recurrent architecture) except itself (no self-feedback).
  - The neuron state at time n is its output value.
  - The network state at time n is the vector of neuron states.
  - The activation function used to update a neuron state is the sign function, except that if the input of the activation function is 0, then the new output (state) of the neuron is equal to the old one.
  - Weights are symmetric: w_ij = w_ji.
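The zero-input convention in the last-but-one bullet can be made explicit in a small helper (a sketch; the function name is ours):

```python
def hopfield_sign(activation, old_state):
    """Sign activation for a Hopfield neuron: positive net input gives +1,
    negative gives -1, and a zero net input keeps the previous state."""
    if activation > 0:
        return 1
    if activation < 0:
        return -1
    return old_state

print(hopfield_sign(2.5, -1), hopfield_sign(-0.5, 1), hopfield_sign(0.0, -1))
```

Without this convention a neuron with zero net input would have no well-defined next state in the bipolar {-1, +1} coding.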
  14. How do we compute the weights?
  - N: input dimension.
  - M: number of patterns (called fundamental memories) used to compute the weights.
  - f_{μ,i}: i-th component of the μ-th fundamental memory.
  - x_i(n): state of neuron i at time n.
  15. Training
  1. Storage. Let f_1, f_2, ..., f_M denote a known set of N-dimensional fundamental memories. The weights of the network are:
     w_ji = (1/M) Σ_{μ=1..M} f_{μ,i} f_{μ,j}   for j ≠ i
     w_ji = 0                                  for j = i
  where w_ji is the weight from neuron i to neuron j. The elements of the vector f_μ are in {-1, +1}. Once they are computed, the synaptic weights are kept fixed.
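The storage rule can be sketched directly (the function name `store` is ours):

```python
import numpy as np

def store(memories):
    """Storage rule: w_ji = (1/M) * sum over mu of f_mu,i * f_mu,j
    for j != i, and w_jj = 0 (no self-feedback)."""
    F = np.asarray(memories)        # M x N matrix of fundamental memories
    M = F.shape[0]
    W = F.T @ F / M
    np.fill_diagonal(W, 0.0)
    return W

# With the two memories (-1,-1,-1) and (1,1,1), every off-diagonal
# weight comes out as ((-1)*(-1) + 1*1) / 2 = 1.
print(store([[-1, -1, -1], [1, 1, 1]]))
```

Note that `W = F.T @ F / M` would put 1 on the diagonal (each component is perfectly correlated with itself), so the diagonal is zeroed explicitly to enforce the no-self-feedback condition.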
  16. Rationale
  - Each stored pattern (fundamental memory) establishes a correlation between pairs of neurons: neurons tend to be of the same sign or of opposite sign according to their values in the pattern.
  - If w_ij is large, this expresses an expectation that neurons i and j are positively correlated. If it is small (negative), this indicates a negative correlation.
  - Σ_{i,j} w_ij x_i x_j will thus be large for a state x equal to a fundamental memory (since w_ij will be positive if the product x_i x_j > 0 and negative if x_i x_j < 0).
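This claim can be checked numerically; the two memories below are illustrative, not from the slides:

```python
import numpy as np

# The quantity sum_ij w_ij x_i x_j is largest when x equals a stored
# pattern.  Weights come from the Hebbian storage rule over two
# illustrative memories; flipping one component lowers the score.
F = np.array([[1, -1,  1, -1],
              [1,  1, -1, -1]])
W = F.T @ F / len(F)
np.fill_diagonal(W, 0.0)

def score(x):
    return float(x @ W @ x)

memory = F[0]
corrupted = memory.copy()
corrupted[0] = -corrupted[0]         # flip one component
print(score(memory), score(corrupted))
```

The score of the intact memory exceeds the score of the corrupted version, which is why the dynamics pull corrupted states back toward the stored pattern.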
  17. Execution
  2. Initialisation. Let x_probe denote an input vector (probe) presented to the network. The algorithm is initialised by setting:
     x_j(0) = x_probe,j    j = 1, ..., N
  where x_j(0) is the state of neuron j at time n = 0, and x_probe,j is the j-th element of the probe vector x_probe.
  3. Iteration until convergence. Update the elements of the network state x(n) asynchronously (i.e. randomly and one at a time) according to the rule
     x_j(n+1) = sign( Σ_{i=1..N} w_ji x_i(n) )    j = 1, 2, ..., N
  Repeat the iteration until the state x remains unchanged.
  4. Outputting. Let x_fixed denote the fixed point or stable state, i.e. such that x(n+1) = x(n), computed at the end of step 3. The resulting output y of the network is: y = x_fixed.
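Steps 2-4 can be sketched as one retrieval function (a minimal sketch; the name `retrieve` and the sweep-based stopping test are ours):

```python
import numpy as np

def retrieve(W, probe, max_sweeps=100, seed=0):
    """Initialise with the probe, update neurons asynchronously
    (one at a time, in random order) with the sign rule, and stop
    when a full sweep over all neurons changes nothing."""
    rng = np.random.default_rng(seed)
    x = np.array(probe, dtype=int)
    n = len(x)
    for _ in range(max_sweeps):
        changed = False
        for j in rng.permutation(n):
            a = W[j] @ x
            new = int(np.sign(a)) if a != 0 else x[j]  # 0 keeps old state
            if new != x[j]:
                x[j] = new
                changed = True
        if not changed:
            break
    return x

# Weights storing (-1,-1,-1) and (1,1,1); a noisy probe is repaired.
W = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
print(retrieve(W, [1, 1, -1]).tolist())   # [1, 1, 1]
```

The asynchronous (one-neuron-at-a-time) order matters: with symmetric weights and no self-feedback it guarantees convergence to a fixed point, whereas synchronous updates can oscillate.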
  18. Execution: Pictorially
  1. Auto-associative patterns to remember (2x2 grids of units 1-4).
  Component/node value legend: dark (blue) with x => +1; dark (red) without x => -1; light (green) => 0.
  2. Distributed storage of all patterns:
  - 1 node per pattern unit
  - fully connected: clique
  - weights = average correlations across all patterns of the corresponding units
  3. Retrieval: a partial pattern iterates to the stored pattern it best matches.
  19. Example of Execution
  (Diagram: 3-neuron network with weights w12 = +1, w13 = -1, w23 = -1; the eight states (±1, ±1, ±1) are shown at the corners of a cube.)
  Stable states: (1, 1, -1) and (-1, -1, 1).
  The remaining states fall into attraction basin 1 or attraction basin 2, one basin per stable state.
  20. Example of Training
  Consider the two fundamental memories (-1 -1 -1) and (1 1 1). Find the weights of the Hopfield network:
     w = (1*1 + (-1)*(-1))/2 = 1   for every pair of distinct neurons
  Network behaviour: probes with a majority of -1 components converge to (-1, -1, -1); probes with a majority of +1 components converge to (1, 1, 1).
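The arithmetic on this slide can be checked directly in plain Python:

```python
# Weights for the two fundamental memories (-1, -1, -1) and (1, 1, 1):
# each off-diagonal weight is ((-1)*(-1) + 1*1) / 2 = 1, diagonal is 0.
memories = [(-1, -1, -1), (1, 1, 1)]
M = len(memories)
N = len(memories[0])
w = [[0.0] * N for _ in range(N)]
for j in range(N):
    for i in range(N):
        if j != i:
            w[j][i] = sum(f[i] * f[j] for f in memories) / M
print(w)   # every off-diagonal entry equals 1.0
```

With all off-diagonal weights equal to +1, each neuron's update is the sign of the sum of the other two states, which is exactly the majority behaviour described above.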
