2. DYNAMIC INFERENCE PROBLEM

2.1. Markov random fields. An instance of Markov random field (MRF) is specified by a tuple $\mathcal{I} = (V, E, Q, \Phi)$, where $G = (V, E)$ is an undirected simple graph; $Q$ is a domain of $q = |Q|$ spin states, for some finite $q > 1$; and $\Phi = (\phi_a)_{a \in V \cup E}$ associates to each vertex $v \in V$ a vertex potential $\phi_v : Q \to \mathbb{R}$ and to each edge $e \in E$ an edge potential $\phi_e : Q^2 \to \mathbb{R}$, where $\phi_e$ is symmetric.

A configuration $\sigma \in Q^V$ maps each vertex $v \in V$ to a spin state in $Q$, so that each vertex can be interpreted as a variable. The Hamiltonian of a configuration $\sigma \in Q^V$ is defined as
\[
H(\sigma) \triangleq \sum_{v \in V} \phi_v(\sigma_v) + \sum_{e = \{u,v\} \in E} \phi_e(\sigma_u, \sigma_v).
\]
This defines the Gibbs distribution $\mu_{\mathcal{I}}$, which is a probability distribution over $Q^V$ such that
\[
\forall \sigma \in Q^V, \quad \mu_{\mathcal{I}}(\sigma) = \frac{1}{Z} \exp(H(\sigma)),
\]
where the normalizing factor $Z \triangleq \sum_{\sigma \in Q^V} \exp(H(\sigma))$ is called the partition function.

The Gibbs measure $\mu(\sigma)$ can be $0$, since the potentials $\phi_v, \phi_e$ may take the value $-\infty$. A configuration $\sigma$ is called feasible if $\mu(\sigma) > 0$. To trivialize the problem of constructing a feasible configuration, we further assume the following natural condition for the MRF instances considered in this paper:¹
\[
\forall v \in V,\ \forall \sigma \in Q^{\Gamma_G(v)}: \quad \sum_{c \in Q} \exp\Big(\phi_v(c) + \sum_{u \in \Gamma_G(v)} \phi_{uv}(\sigma_u, c)\Big) > 0, \tag{1}
\]
where $\Gamma_G(v) \triangleq \{u \in V \mid \{u, v\} \in E\}$ denotes the neighborhood of $v$ in graph $G = (V, E)$.

¹This condition guarantees that the marginal probabilities are always well-defined, and that the problem of constructing a feasible configuration $\sigma$ with $\mu_{\mathcal{I}}(\sigma) > 0$ is trivial. The condition holds for all MRFs with soft constraints, or with hard constraints where there is a permissive spin, e.g. the hardcore model. For MRFs with truly repulsive hard constraints such as proper $q$-coloring, the condition may translate to the requirement $q \ge \Delta + 1$, where $\Delta$ is the maximum degree of $G$, which is necessary for the irreducibility of local Markov chains for $q$-colorings.

Some well-studied typical MRFs include:
• Ising model: The domain of each spin is $Q = \{-1, +1\}$. Each edge $e \in E$ is associated with a temperature $\beta_e \in \mathbb{R}$, and each vertex $v \in V$ with a local field $h_v \in \mathbb{R}$. For each configuration $\sigma \in \{-1, +1\}^V$, $\mu_{\mathcal{I}}(\sigma) \propto \exp\big(\sum_{e = \{u,v\} \in E} \beta_e \sigma_u \sigma_v + \sum_{v \in V} h_v \sigma_v\big)$ (see the sketch following this list).
• Hardcore model: The domain is $Q = \{0, 1\}$. Each configuration $\sigma \in Q^V$ indicates an independent set in $G = (V, E)$, and $\mu_{\mathcal{I}}(\sigma) \propto \lambda^{\|\sigma\|}$, where $\lambda > 0$ is the fugacity parameter.
• Proper $q$-coloring: the uniform distribution over all proper $q$-colorings of $G = (V, E)$.
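On very small instances, the Gibbs distribution can be evaluated exactly by brute force directly from the definitions above. The following is a minimal Python sketch, not part of the paper, that enumerates all configurations of a toy Ising model on a three-vertex path; the graph, temperatures $\beta_e$, and fields $h_v$ are arbitrary illustrative choices.

```python
import itertools
import math

# Toy Ising model on a 3-vertex path; all parameter values are arbitrary
# illustrative choices, not taken from the paper.
V = [0, 1, 2]                        # vertices
E = [(0, 1), (1, 2)]                 # edges of a path graph
beta = {(0, 1): 0.5, (1, 2): -0.3}   # edge temperatures beta_e
h = {0: 0.1, 1: 0.0, 2: -0.2}        # local fields h_v

def hamiltonian(sigma):
    """H(sigma) = sum_e beta_e * sigma_u * sigma_v + sum_v h_v * sigma_v."""
    return (sum(beta[(u, v)] * sigma[u] * sigma[v] for (u, v) in E)
            + sum(h[v] * sigma[v] for v in V))

# Enumerate all q^|V| = 2^3 configurations; sigma is a tuple indexed by vertex.
configs = list(itertools.product([-1, +1], repeat=len(V)))

# Partition function Z and Gibbs probabilities mu(sigma) = exp(H(sigma)) / Z.
Z = sum(math.exp(hamiltonian(s)) for s in configs)
mu = {s: math.exp(hamiltonian(s)) / Z for s in configs}

assert abs(sum(mu.values()) - 1.0) < 1e-9  # mu is a probability distribution
print(max(mu, key=mu.get))                 # most probable configuration
```

Since the enumeration ranges over $q^{|V|}$ configurations, this serves only as a sanity check; the inference problems below are therefore stated as estimation problems rather than exact evaluation.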
2.2. Probabilistic inference and sampling. In graphical models, the task of probabilistic inference is to derive the probabilities regarding one or more random variables of the model. Abstractly, this is described by a function $\theta : \mathcal{M} \to \mathbb{R}^K$ that maps each graphical model $\mathcal{I} \in \mathcal{M}$ to a target $K$-dimensional probability vector, where $\mathcal{M}$ is the class of graphical models containing the random variables we are interested in, and the $K$-dimensional vector describes the probabilities we want to derive. Given $\theta(\cdot)$ and an MRF instance $\mathcal{I} \in \mathcal{M}$, the inference problem asks to estimate the probability vector $\theta(\mathcal{I})$.

Here are some fundamental inference problems [WJ08] for MRF instances. Let $\mathcal{I} = (V, E, Q, \Phi)$ be an MRF instance and $A, B \subseteq V$ two disjoint sets, so that $A \uplus B \subseteq V$; a brute-force sketch exercising both definitions follows the list.
• Marginal inference: estimate the marginal distribution $\mu_{A,\mathcal{I}}(\cdot)$ of the variables in $A$, where
\[
\forall \sigma_A \in Q^A, \quad \mu_{A,\mathcal{I}}(\sigma_A) \triangleq \sum_{\tau \in Q^{V \setminus A}} \mu_{\mathcal{I}}(\sigma_A, \tau).
\]
• Posterior inference: given any $\tau_B \in Q^B$, estimate the posterior distribution $\mu_{A,\mathcal{I}}(\cdot \mid \tau_B)$ for the variables in $A$, where
\[
\forall \sigma_A \in Q^A, \quad \mu_{A,\mathcal{I}}(\sigma_A \mid \tau_B) \triangleq \frac{\mu_{A \cup B,\mathcal{I}}(\sigma_A, \tau_B)}{\mu_{B,\mathcal{I}}(\tau_B)}.
\]
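As with the Gibbs distribution itself, both quantities can be computed exactly on tiny instances by exhaustive summation. The sketch below assumes the same toy Ising path as in the previous example (again an arbitrary illustration, not the paper's method) and evaluates a marginal and a posterior probability directly from the definitions.

```python
import itertools
import math

# Same toy Ising path as in the previous sketch (arbitrary illustrative
# parameters); vertices are indexed 0..|V|-1, so a configuration is a tuple.
V = [0, 1, 2]
E = [(0, 1), (1, 2)]
beta = {(0, 1): 0.5, (1, 2): -0.3}
h = {0: 0.1, 1: 0.0, 2: -0.2}

def weight(sigma):
    """Unnormalized Gibbs weight exp(H(sigma))."""
    H = (sum(beta[(u, v)] * sigma[u] * sigma[v] for (u, v) in E)
         + sum(h[v] * sigma[v] for v in V))
    return math.exp(H)

def marginal(A, sigma_A, tau_B=None):
    """mu_{A}(sigma_A | tau_B) by exhaustive summation over Q^V.

    With tau_B=None this is plain marginal inference; otherwise it is the
    posterior mu_{A union B}(sigma_A, tau_B) / mu_B(tau_B) from the text.
    """
    tau_B = tau_B or {}
    fixed = {**dict(zip(A, sigma_A)), **tau_B}
    num = den = 0.0
    for s in itertools.product([-1, +1], repeat=len(V)):
        if all(s[v] == c for v, c in tau_B.items()):
            den += weight(s)                       # mass consistent with tau_B
            if all(s[v] == c for v, c in fixed.items()):
                num += weight(s)                   # ...and with sigma_A as well
    return num / den

print(marginal([0], [+1]))           # marginal mu_{{0}}(+1)
print(marginal([0], [+1], {2: -1}))  # posterior mu_{{0}}(+1 | sigma_2 = -1)
```

The direct summation again takes $q^{|V|}$ time, which is why practical inference, as studied in the rest of this section, estimates such probabilities (e.g. via sampling) instead of enumerating $Q^V$.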