Joint Distribution over Observed Variables

 Marginalizing out the latent variables in P(X1, ..., Xn, Y1, ..., Ym), we get a joint distribution over the observed variables P(X1, ..., Xn).
 In comparison with Bayesian networks without latent variables, an LTM:
   Is computationally very simple to work with.
   Represents complex relationships among manifest variables.
 What does the structure look like without the latent variables?

[Figure: an example latent tree with latent variable Y1 and manifest variables X1-X7]

(AAAI 2014 Tutorial, Nevin L. Zhang, HKUST)
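Below is a minimal sketch, not part of the tutorial, of this marginalization for a toy latent tree: one binary latent root Y1 with three binary manifest children X1, X2, X3. The CPT values and variable names are arbitrary assumptions for illustration; the point is only that summing out Y1 from the latent-tree joint leaves a valid joint distribution over the observed variables.

```python
# A minimal sketch (illustrative CPT values, not from the tutorial):
# marginalizing out one latent variable in a toy latent tree model.
import numpy as np

# Latent tree: binary latent root Y1 with observed binary children X1, X2, X3.
p_y = np.array([0.6, 0.4])                      # P(Y1)
p_x_given_y = [
    np.array([[0.9, 0.1],                       # P(X1 | Y1), rows indexed by Y1
              [0.2, 0.8]]),
    np.array([[0.7, 0.3],                       # P(X2 | Y1)
              [0.1, 0.9]]),
    np.array([[0.8, 0.2],                       # P(X3 | Y1)
              [0.3, 0.7]]),
]

# Full joint P(X1, X2, X3, Y1) from the latent-tree factorization
# P(Y1) * prod_i P(Xi | Y1), stored as a 4-dimensional table.
joint = np.einsum("y,ya,yb,yc->abcy", p_y, *p_x_given_y)

# Marginalize out the latent variable:
# P(X1, X2, X3) = sum_y P(X1, X2, X3, Y1 = y).
marginal = joint.sum(axis=-1)

print(marginal.shape)   # (2, 2, 2): a table over the observed variables only
print(marginal.sum())   # 1.0: still a proper joint distribution
```

Note that the resulting table over X1, X2, X3 generally does not factorize into a sparse network over the observed variables alone, which is the point of the closing question on the slide.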