
Artificial Intelligence: A Modern Approach, teaching resources (lecture notes, English edition), chapter20b



Neural networks
Chapter 20, Section 5


Outline
♦ Brains
♦ Neural networks
♦ Perceptrons
♦ Multilayer perceptrons
♦ Applications of neural networks


Brains
10^11 neurons of > 20 types, 10^14 synapses, 1 ms–10 ms cycle time
Signals are noisy "spike trains" of electrical potential
[Figure: schematic neuron, labeling the cell body (soma), nucleus, dendrites, axon, axonal arborization, synapses, and an axon arriving from another cell]


McCulloch–Pitts "unit"
Output is a "squashed" linear function of the inputs:
    a_i ← g(in_i) = g(Σ_j W_j,i a_j)
[Figure: a single unit, with input links carrying a_j weighted by W_j,i, a fixed bias input a_0 = −1 with bias weight W_0,i, the input function Σ producing in_i, the activation function g, and output a_i on the output links]
A gross oversimplification of real neurons, but its purpose is to develop understanding of what networks of simple units can do
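As a concrete illustration (not on the slide), here is a minimal Python sketch of a single unit; the helper name unit_output and passing g as a Python callable are my assumptions, but the computation is exactly the slide's a_i = g(Σ_j W_j,i a_j), with the bias handled as a fixed input a_0 = −1.

```python
import math

def unit_output(weights, inputs, g):
    """McCulloch-Pitts unit: a_i = g(sum_j W_j,i * a_j).

    weights[0] is the bias weight W_0,i; the input vector is prefixed
    with the fixed bias activation a_0 = -1, as on the slide.
    """
    activations = [-1.0] + list(inputs)          # a_0 = -1
    in_i = sum(w * a for w, a in zip(weights, activations))
    return g(in_i)

sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
print(unit_output([1.5, 1.0, 1.0], [1, 1], sigmoid))  # AND-like unit, both inputs on
```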


Activation functions
[Figure: two plots against in_i; (a) jumps from 0 to +1 at the threshold, (b) rises smoothly from 0 to +1]
(a) is a step function or threshold function
(b) is a sigmoid function 1/(1 + e^(−x))
Changing the bias weight W_0,i moves the threshold location
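A direct transcription of the two activation functions into Python (a sketch; the function names are mine):

```python
import math

def step(in_i):
    """Threshold (step) activation: 0/1 jump at in_i = 0."""
    return 1.0 if in_i >= 0 else 0.0

def sigmoid(in_i):
    """Sigmoid activation 1 / (1 + e^(-in_i)): a smooth version of step."""
    return 1.0 / (1.0 + math.exp(-in_i))

# With the bias convention a_0 = -1, increasing W_0,i subtracts more from
# in_i, which shifts the effective threshold of either function rightward.
```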


Implementing logical functions
AND: W_0 = 1.5, W_1 = 1, W_2 = 1
OR:  W_0 = 0.5, W_1 = 1, W_2 = 1
NOT: W_0 = −0.5, W_1 = −1
McCulloch and Pitts: every Boolean function can be implemented
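The three units above can be checked directly; a small sketch (the helper name perceptron is mine) using the step activation and the bias convention a_0 = −1:

```python
def perceptron(weights, inputs):
    """Step-activation unit with bias input a_0 = -1 (weights[0] = W_0)."""
    in_i = sum(w * a for w, a in zip(weights, [-1.0] + list(inputs)))
    return 1 if in_i >= 0 else 0

AND = lambda x1, x2: perceptron([1.5, 1.0, 1.0], [x1, x2])
OR  = lambda x1, x2: perceptron([0.5, 1.0, 1.0], [x1, x2])
NOT = lambda x1:     perceptron([-0.5, -1.0],    [x1])

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, AND(x1, x2), OR(x1, x2))  # reproduces the truth tables
```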


Network structures
Feed-forward networks:
– single-layer perceptrons
– multi-layer perceptrons
Feed-forward networks implement functions, have no internal state

Recurrent networks:
– Hopfield networks have symmetric weights (W_i,j = W_j,i);
  g(x) = sign(x), a_i = ±1; holographic associative memory (see the sketch after this list)
– Boltzmann machines use stochastic activation functions,
  ≈ MCMC in Bayes nets
– recurrent neural nets have directed cycles with delays
  ⇒ have internal state (like flip-flops), can oscillate etc.
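For the Hopfield item, a minimal sketch of recall with g(x) = sign(x) and symmetric weights. The slide does not say how the weights are set; the Hebbian outer-product rule used here is a standard choice, not something stated on the slide, and the function names are mine:

```python
import numpy as np

def hopfield_weights(patterns):
    """Hebbian (outer-product) weights: symmetric, zero diagonal."""
    n = len(patterns[0])
    W = sum(np.outer(p, p) for p in patterns) / float(n)
    np.fill_diagonal(W, 0.0)          # enforces W_i,j = W_j,i with no self-loops
    return W

def hopfield_recall(W, a, steps=10):
    """Asynchronous sign-activation updates: a_i = sign(sum_j W_i,j a_j)."""
    a = np.array(a, dtype=float)
    for _ in range(steps):
        for i in np.random.permutation(len(a)):
            a[i] = 1.0 if W[i] @ a >= 0 else -1.0
    return a

pattern = np.array([1, -1, 1, -1, 1, -1])
W = hopfield_weights([pattern])
noisy = pattern.copy()
noisy[0] = -1                          # corrupt one unit
print(hopfield_recall(W, noisy))       # settles back to the stored pattern
```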


Feed-forward example
[Figure: input units 1 and 2 feed hidden units 3 and 4 via weights W_1,3, W_1,4, W_2,3, W_2,4; units 3 and 4 feed output unit 5 via W_3,5 and W_4,5]
Feed-forward network = a parameterized family of nonlinear functions:
    a_5 = g(W_3,5 · a_3 + W_4,5 · a_4)
        = g(W_3,5 · g(W_1,3 · a_1 + W_2,3 · a_2) + W_4,5 · g(W_1,4 · a_1 + W_2,4 · a_2))
Adjusting weights changes the function: do learning this way!
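The nested expression can be evaluated directly; a sketch with hypothetical weight values (the slide gives none), just to show that the weights parameterize the function:

```python
import math

g = lambda x: 1.0 / (1.0 + math.exp(-x))      # sigmoid activation

def feedforward(a1, a2, W):
    """Evaluate the 5-unit network on the slide for a given weight dict W."""
    a3 = g(W[(1, 3)] * a1 + W[(2, 3)] * a2)
    a4 = g(W[(1, 4)] * a1 + W[(2, 4)] * a2)
    return g(W[(3, 5)] * a3 + W[(4, 5)] * a4)

# Hypothetical values; changing any of them changes the function computed.
W = {(1, 3): 0.5, (2, 3): -0.5, (1, 4): 1.0,
     (2, 4): 1.0, (3, 5): 2.0, (4, 5): -1.0}
print(feedforward(1.0, 0.0, W))
```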


Single-layer perceptrons
[Figure: (left) input units wired directly to output units by weights W_j,i; (right) surface plot of perceptron output over (x_1, x_2), showing a soft "cliff"]
Output units all operate separately: no shared weights
Adjusting weights moves the location, orientation, and steepness of the cliff
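A small numeric illustration of the last point (not on the slide): scaling all weights by a common factor k keeps the boundary w_1 x_1 + w_2 x_2 = w_0 in place but steepens the cliff around it, while changing W_0 alone would shift its location.

```python
import math

sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))

def perceptron_out(w0, w1, w2, x1, x2):
    """Soft perceptron output g(-w0 + w1*x1 + w2*x2), bias input a_0 = -1."""
    return sigmoid(-w0 + w1 * x1 + w2 * x2)

# The point (0.6, 0.6) sits just past the boundary x1 + x2 = 1; larger k
# pushes the output closer to 1 there, i.e. the cliff gets steeper.
for k in (1, 2, 10):
    print(k, perceptron_out(k * 1.0, k * 1.0, k * 1.0, 0.6, 0.6))
```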


Expressiveness of perceptrons
Consider a perceptron with g = step function (Rosenblatt, 1957, 1960)
Can represent AND, OR, NOT, majority, etc., but not XOR
Represents a linear separator in input space:
    Σ_j W_j x_j > 0  or  W · x > 0
[Figure: points in (x_1, x_2) space; (a) x_1 AND x_2 and (b) x_1 OR x_2 each admit a separating line, but (c) x_1 XOR x_2 does not]
Minsky & Papert (1969) pricked the neural network balloon
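The XOR claim can be checked by brute force; a sketch (an illustration of the argument, not on the slide) searching a grid of step-perceptron weights and finding that at most 3 of the 4 XOR cases can be matched:

```python
import itertools

def step_perceptron(w0, w1, w2, x1, x2):
    """Step unit with bias input a_0 = -1: output 1 iff w1*x1 + w2*x2 >= w0."""
    return 1 if -w0 + w1 * x1 + w2 * x2 >= 0 else 0

XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

# Exhaustive search over a coarse weight grid: no assignment gets all four
# cases right, because XOR is not linearly separable; the best is 3/4.
grid = [i / 4.0 for i in range(-8, 9)]
best = max(
    sum(step_perceptron(w0, w1, w2, *x) == y for x, y in XOR.items())
    for w0, w1, w2 in itertools.product(grid, repeat=3)
)
print(best)  # 3
```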
