In a Hidden Markov Model for a tagging problem, given a sequence of words x_i and the corresponding labels y_i, we model the joint distribution p(x, y) as:
p(x_1, \dots, x_n, y_1, \dots, y_{n+1}) = \prod_{i=1}^{n+1} q(y_i \mid y_{i-2}, y_{i-1}) \, \prod_{i=1}^{n} e(x_i \mid y_i)
Assuming y_0 = y_{-1} = *.
Write down the independence assumptions which correspond to this equation.
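For context, a minimal sketch of the assumptions usually cited for this factorization (the standard trigram-HMM derivation; the expected wording of the answer may differ):

% Apply the chain rule to p(x_1, ..., x_n, y_1, ..., y_{n+1}), then assume:
\begin{align*}
p(y_i \mid y_1, \dots, y_{i-1}, x_1, \dots, x_{i-1}) &= q(y_i \mid y_{i-2}, y_{i-1})
  && \text{(trigram assumption: each tag depends only on the previous two tags)} \\
p(x_i \mid y_1, \dots, y_{n+1}, x_1, \dots, x_{i-1}) &= e(x_i \mid y_i)
  && \text{(each word depends only on its own tag)}
\end{align*}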
* The question was added on: 01-02-2020