
Artificial Intelligence course materials (PPT lecture slides): Ch10 Auto-encoders (Auto and variational encoders v.9r6)

Resource type: library document. Format: PPTX. Pages: 80. File size: 2.52 MB.
Part 1: Overview of the Vanilla (traditional) Autoencoder
• Introduction
• Theory
• Architecture
• Application
• Examples
Part 2: Variational Autoencoder, what you will learn:
• What a variational autoencoder is
• How to train it
• How to use it


Ch10. Auto-encoders, KH Wong (Ch10. Auto and variational encoders v.9r6)


Two types of autoencoders
• Part 1: Vanilla (traditional) autoencoder, or simply "autoencoder"
• Part 2: Variational autoencoder


Part 1: Overview of the Vanilla (traditional) Autoencoder
• Introduction
• Theory
• Architecture
• Application
• Examples


Introduction
• What is an autoencoder?
– An unsupervised method
• Applications
– Noise removal
– Dimensionality reduction
• Method
– Use noise-free ground-truth data (e.g. MNIST) plus self-generated noise to train the network
– The trained network can remove noise from a corrupted input (e.g. handwritten characters); the output will be similar to the ground-truth data
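The training-pair construction described above can be sketched in a few lines: take clean ground-truth images and add self-generated noise to obtain (noisy, clean) pairs. This is a minimal sketch; the array sizes and the 0.3 noise level are illustrative assumptions, not values from the slides.

```python
import numpy as np

# Hypothetical sketch: build (noisy, clean) training pairs for a denoising
# autoencoder. `clean` stands in for noise-free ground-truth images (e.g.
# MNIST pixels scaled to [0, 1]); the 0.3 noise level is an arbitrary choice.
rng = np.random.default_rng(0)
clean = rng.random((100, 784))                      # 100 flattened 28x28 "images"
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
noisy = np.clip(noisy, 0.0, 1.0)                    # keep pixel values in range

# A denoising autoencoder would be trained to map `noisy` back to `clean`.
print(noisy.shape)  # (100, 784)
```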

Noise removal
• https://www.slideshare.net/billlangjun/simple-introduction-to-autoencoder
[Figure: noisy input -> Encoder -> compressed representation (the feature we want to extract from the image) -> Decoder -> denoised image]
• Result: plt.title('Original images: top rows, ' 'Corrupted Input: middle rows, ' 'Denoised Input: third rows')


Autoencoder structure
• An autoencoder is a feedforward neural network that learns to predict the input (corrupted by noise) itself in the output: y^(i) = x^(i)
• The input-to-hidden part corresponds to an encoder
• The hidden-to-output part corresponds to a decoder
• Input and output are of the same dimension and size
https://towardsdatascience.com/deep-autoencoders-using-tensorflow-c68f075fd1a3
[Figure: encoder-decoder network; input and output layers have the same size]
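The shape contract on this slide (a hidden code smaller than the input, and an output of the same size as the input) can be sketched as follows; the 784 -> 32 -> 784 sizes and the random weights are illustrative assumptions, not taken from the slides.

```python
import numpy as np

# Minimal sketch of the encoder/decoder shape contract: the hidden
# (compressed) code is smaller than the input, and the output dimension
# equals the input dimension.
rng = np.random.default_rng(1)
x = rng.random(784)                              # flattened 28x28 input
W_enc = rng.standard_normal((32, 784)) * 0.01    # encoder weights (assumed sizes)
W_dec = rng.standard_normal((784, 32)) * 0.01    # decoder weights

sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

code = sigmoid(W_enc @ x)       # compressed representation (encoder)
x_out = sigmoid(W_dec @ code)   # reconstruction (decoder)

print(code.shape, x_out.shape)  # (32,) (784,)
```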


Theory
• x -> F -> x'
• z = σ(Wx + b) ----------- (*)
• x' = σ'(W'z + b') ------- (**)
• Autoencoders are trained to minimize reconstruction error (such as squared error), often referred to as the loss L
• Combining (*) and (**):
• L(x, x') = ||x - x'||²
           = ||x - σ'(W'σ(Wx + b) + b')||²
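As a sanity check on the algebra, evaluating (*) and (**) step by step must give the same loss as the combined expression for L(x, x'). A small numerical sketch, with arbitrary sizes and random weights standing in for a trained network:

```python
import numpy as np

# Verify that substituting (*) into (**) gives the same loss value as
# computing z and x' in two steps. All sizes and weights are illustrative.
rng = np.random.default_rng(0)
x = rng.random(6)
W, b = rng.standard_normal((3, 6)), rng.standard_normal(3)    # encoder
Wp, bp = rng.standard_normal((6, 3)), rng.standard_normal(6)  # decoder
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

# Step by step: (*) then (**)
z = sigmoid(W @ x + b)
x_prime = sigmoid(Wp @ z + bp)
loss_steps = np.sum((x - x_prime) ** 2)

# Combined form: L = ||x - sigma'(W' sigma(Wx + b) + b')||^2
loss_combined = np.sum((x - sigmoid(Wp @ sigmoid(W @ x + b) + bp)) ** 2)

print(np.isclose(loss_steps, loss_combined))  # True
```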


Exercise 1
• How many input, hidden, and output layers are in the figure shown?
• How many neurons are in these layers?
• What is the relation between the number of input and output neurons?
[Figure: the encoder-decoder network from the previous slide]


Answer 1
• How many input, hidden, and output layers are in the figure shown?
– Answer: 1 input, 3 hidden, and 1 output layer
• How many neurons are in these layers?
– Answer: input (4), hidden (3, 2, 3), output (4)
• What is the relation between the number of input and output neurons?
– Answer: they are the same


Architecture
• Encoder and decoder
• Training can use typical backpropagation methods
https://towardsdatascience.com/how-to-reduce-image-noises-by-autoencoder-65d5e6de543
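Training by "typical backpropagation" can be sketched with hand-derived gradients of the squared reconstruction error for a tiny one-hidden-layer autoencoder fitted to a single sample. The layer sizes, learning rate, and step count below are all illustrative assumptions, not values from the slides.

```python
import numpy as np

# Hedged sketch: gradient descent on L = ||x - x'||^2 for a tiny autoencoder,
# with backpropagation written out by hand through both sigmoid layers.
rng = np.random.default_rng(2)
x = rng.random(8)                                           # one training sample
W = rng.standard_normal((3, 8)) * 0.1;  b = np.zeros(3)     # encoder parameters
Wp = rng.standard_normal((8, 3)) * 0.1; bp = np.zeros(8)    # decoder parameters
lr = 0.5                                                    # assumed learning rate

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def loss(x, W, b, Wp, bp):
    return np.sum((x - sigmoid(Wp @ sigmoid(W @ x + b) + bp)) ** 2)

initial_loss = loss(x, W, b, Wp, bp)
for _ in range(500):
    z = sigmoid(W @ x + b)                  # forward: encoder
    xr = sigmoid(Wp @ z + bp)               # forward: decoder
    d_out = 2 * (xr - x) * xr * (1 - xr)    # dL/d(pre-activation) at the output
    d_hid = (Wp.T @ d_out) * z * (1 - z)    # backpropagate into the hidden layer
    Wp -= lr * np.outer(d_out, z); bp -= lr * d_out
    W  -= lr * np.outer(d_hid, x); b  -= lr * d_hid

final_loss = loss(x, W, b, Wp, bp)
print(final_loss)  # reconstruction error, well below initial_loss
```

In practice a framework (e.g. Keras or PyTorch) computes these gradients automatically; the manual chain rule here only illustrates what backpropagation does.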
