Friday, April 29, 2016

The Equivalence of Logistic Regression and Maximum Entropy Modeling

http://web.engr.illinois.edu/~mqian2/upload/research/notes/The%20Equivalence%20of%20Logistic%20Regression%20and%20Maximum%20Entropy%20Modeling.pdf

B. Dual Problem

The dual problem of maximum entropy modeling is an unconstrained optimization problem.


Posterior probability

From Wikipedia, the free encyclopedia
In Bayesian statistics, the posterior probability of a random event or an uncertain proposition is the conditional probability that is assigned after the relevant evidence or background is taken into account. Similarly, the posterior probability distribution is the probability distribution of an unknown quantity, treated as a random variable, conditional on the evidence obtained from an experiment or survey. "Posterior", in this context, means after taking into account the relevant evidence related to the particular case being examined.

Definition

The posterior probability is the probability of the parameters \theta given the evidence X: p(\theta|X).
It contrasts with the likelihood function, which is the probability of the evidence given the parameters: p(X|\theta).
The two are related as follows:
Suppose we have a prior belief that the probability distribution function is p(\theta) and observations x with likelihood p(x|\theta); then the posterior probability is defined as
p(\theta|x) = \frac{p(x|\theta)p(\theta)}{p(x)}.[1]
The posterior probability can be written in the memorable form
\text{Posterior probability} \propto \text{Likelihood} \times \text{Prior probability}
