Casey Chu
- The principle of maximum entropy
- First proposed by Jaynes (1957), the principle of maximum entropy is a method of choosing, out of a set of probability distributions, one particular distribution that purportedly best represents our state of knowledge. It works like this. Suppose we want to assign a probability distribution $p = (p_1, \dots, p_m)$ to a set of $m$ outcomes to describe our knowledge of the outcomes, but we aren’t sure which distribution to assign. Let’s say that we do know that certain distributions are completely ruled out, and that the distributions that are allowed are given by a set $\mathcal{C}$. Then the principle says to choose the probability distribution that maximizes the Shannon entropy, constraining ourselves to distributions in $\mathcal{C}$.
- At this point, recall that the Shannon entropy of a probability distribution $p = (p_1, \dots, p_m)$ on $m$ outcomes is defined to be $H(p) = -\sum_{i=1}^m p_i \log p_i$, and intuitively, it measures the amount of “uncertainty” present in the distribution. For example, as we become more and more certain of a particular outcome (that is, $p_i \to 1$ for some $i$), the entropy approaches $0$. In contrast, if we don’t know anything about the distribution (that is, $p_i = 1/m$ for all $i$), then it can be shown that the entropy attains its maximum value of $\log m$.
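- To make this concrete, here’s a minimal numerical sketch in Python/NumPy (the helper name `shannon_entropy` and the example distributions are mine): as a distribution on four outcomes moves from fully concentrated to uniform, its entropy climbs from $0$ to $\log 4$.

```python
import numpy as np

def shannon_entropy(p):
    """H(p) = -sum_i p_i log p_i, with the convention 0 log 0 = 0."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz]))

m = 4
print(shannon_entropy([1.0, 0.0, 0.0, 0.0]))  # 0.0: completely certain
print(shannon_entropy([0.7, 0.1, 0.1, 0.1]))  # ~0.94: somewhere in between
print(shannon_entropy(np.full(m, 1 / m)))     # ~1.386: the maximum...
print(np.log(m))                              # ...which equals log m
```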
- Vaguely, therefore, this principle can be thought of as choosing the distribution in $\mathcal{C}$ that is “least certain,” which intuitively makes some sense. In fact, many named distributions that we’ve heard of can be exhibited as instances of the principle of maximum entropy. For example, the uniform distribution, exponential distribution, geometric distribution, and normal distribution can all be viewed as maximum entropy distributions for some set $\mathcal{C}$. I think this points to the fact that this mysterious principle of maximum entropy is fundamental in some way.
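- As a small illustration of the constrained case, the sketch below (Python with SciPy; the outcome range and the particular mean constraint are arbitrary choices of mine) numerically maximizes the entropy over distributions on $\{0, \dots, 9\}$ with a fixed mean and checks that the solution has the geometric form $p_i \propto r^i$, i.e. that $\log p_i$ is affine in $i$.

```python
import numpy as np
from scipy.optimize import minimize

# Maximize entropy over distributions on outcomes {0, 1, ..., 9}
# subject to a fixed mean, and check the solution is geometric-like.
m, target_mean = 10, 2.0
outcomes = np.arange(m)

def neg_entropy(p):
    return np.sum(p * np.log(p))

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},
    {"type": "eq", "fun": lambda p: np.sum(p * outcomes) - target_mean},
]
res = minimize(neg_entropy, np.full(m, 1 / m),
               bounds=[(1e-9, 1)] * m, constraints=constraints)
p = res.x

# For the maximum entropy solution, log p_i is affine in i, so the
# successive differences log p_{i+1} - log p_i should be roughly equal.
print(np.round(np.diff(np.log(p)), 3))
```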
- The Wallis derivation
- In what he calls the Wallis derivation, Jaynes (2003), citing Graham Wallis, gave what seems to me to be a good justification for why the entropy is a reasonable quantity to maximize. It imagines the following random process for generating a probability distribution $p = (p_1, \dots, p_m)$. First divide the total probability mass of $1$ into $N$ “chunks of probability,” each with probability $1/N$. Then, uniformly scatter each chunk of probability among the $m$ outcomes. The result of this process is a probability distribution given by $p_i = n_i / N$, where $n_i$ is the number of chunks that landed in outcome $i$. Eventually, the idea is to make the chunks smaller and smaller, taking $N \to \infty$.
- The probability of obtaining a particular distribution $p$ from this procedure is given by the multinomial distribution $\Pr(p) = \frac{N!}{n_1! \cdots n_m!} \, m^{-N}$, which we can simplify using Stirling’s approximation. To emphasize, for large $N$, the probability becomes $\Pr(p) \approx A \, e^{N H(p)}$, where $A$ is some normalization constant, and $H(p)$ is again the entropy. This means that the most probable distribution generated by the process is the one with the maximum entropy.
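- Here’s a quick simulation of the Wallis process (Python/NumPy; the particular values of $m$, $N$, and the number of trials are mine): as $N$ grows, the distributions it generates concentrate around the uniform, maximum entropy distribution, whose entropy is $\log m$.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 5  # number of outcomes

# Wallis process: scatter N equal chunks of probability uniformly
# among the m outcomes, then read off the empirical frequencies.
for N in [10, 100, 10_000]:
    counts = rng.multinomial(N, np.full(m, 1 / m), size=2000)
    ps = counts / N
    ents = -np.sum(ps * np.log(np.where(ps > 0, ps, 1.0)), axis=1)
    # the average entropy of the generated distributions approaches log m
    print(N, ents.mean().round(3), np.log(m).round(3))
```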
- Therefore, if you believe that the process described is a fair way to generate distributions “uninformatively,” then it would make sense to use the maximum entropy distribution as the least informative one. It even makes sense, in the case where we have some prior knowledge that allows us to restrict the possible distributions to a set $\mathcal{C}$, to maximize the entropy over distributions $p \in \mathcal{C}$, since this corresponds to taking the most likely distribution under this process, but rejecting distributions that are generated but do not fall in $\mathcal{C}$.
- There are several subtleties to this process that I can think of. First, why are we simply taking the most likely distribution $p^*$, instead of considering the full distribution of possible distributions? That is, why can we take a point estimate rather than the more Bayesian approach of integrating over a distribution of distributions? The answer is that the distribution of distributions is extremely concentrated at the maximum entropy distribution. To see this, let $p^*$ be the maximum entropy distribution, with entropy $H^* = H(p^*)$, and let $p$ be any other distribution with entropy $H(p) < H^*$. Then its probability relative to the maximum is $\Pr(p) / \Pr(p^*) \approx e^{-N (H^* - H(p))}$. Therefore, as $N \to \infty$, any non-maximum entropy distribution becomes exponentially less likely than the maximum entropy distribution $p^*$, no matter how small the entropy gap $H^* - H(p)$ is. So it makes sense to consider just the maximum entropy distribution, as any other distributions have negligible probabilities anyway.
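- We can check this concentration numerically (a sketch using SciPy’s `gammaln` for exact log-factorials; the competitor distribution is an arbitrary choice of mine): the exact log-probability ratio $\log \Pr(p)/\Pr(p^*)$ tracks $-N(H^* - H(p))$ increasingly well as $N$ grows.

```python
import numpy as np
from scipy.special import gammaln

def log_multinomial_coef(counts):
    counts = np.asarray(counts, dtype=float)
    return gammaln(counts.sum() + 1) - gammaln(counts + 1).sum()

def entropy(p):
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log(p))

p_star = np.array([0.5, 0.5])  # maximum entropy distribution on 2 outcomes
p      = np.array([0.6, 0.4])  # a slightly less uniform competitor
gap = entropy(p_star) - entropy(p)

# Both probabilities share the factor m^{-N}, so it cancels in the ratio.
for N in [10, 100, 1000, 10_000]:
    log_ratio = log_multinomial_coef(p * N) - log_multinomial_coef(p_star * N)
    print(N, round(log_ratio, 2), round(-N * gap, 2))
```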
- Second, depending on the set $\mathcal{C}$ we take (remember that $\mathcal{C}$ represents the set of possible distributions given our prior information), there might be multiple maximum entropy distributions. In the most common cases, this doesn’t happen, but interestingly, our calculations above already prescribe what to do if it does happen. Since $\Pr(p) / \Pr(p') \approx e^{N (H(p) - H(p'))} = 1$ for two maximum entropy distributions $p$ and $p'$, the resulting distribution over distributions is uniform over the maxima, and $0$ everywhere else (in the limit $N \to \infty$).
- The KL divergence
- The Wallis derivation leads naturally to an interesting generalization. Suppose that we don’t scatter the probability chunks uniformly among the outcomes, but instead according to some other probability distribution $q = (q_1, \dots, q_m)$. Then the probability of generating a distribution $p$ becomes $\Pr(p) = \frac{N!}{n_1! \cdots n_m!} \prod_{i=1}^m q_i^{n_i}$, or, for large $N$, $\Pr(p) \approx A \, e^{-N D_{\mathrm{KL}}(p \,\|\, q)}$, where $D_{\mathrm{KL}}(p \,\|\, q) = \sum_{i=1}^m p_i \log \frac{p_i}{q_i}$. This last quantity is the KL divergence (Kullback–Leibler divergence) from $q$ to $p$, also known as the relative entropy. It’s intuitive to reason about because it behaves almost like a distance between the distributions $p$ and $q$, since $D_{\mathrm{KL}}(p \,\|\, q) \geq 0$, and $D_{\mathrm{KL}}(p \,\|\, q) = 0$ if and only if $p = q$.
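- A quick numerical check of this approximation (Python/SciPy; the particular $p$ and $q$ are made-up values of mine): the per-chunk log-probability $\frac{1}{N} \log \Pr(p)$ approaches $-D_{\mathrm{KL}}(p \,\|\, q)$ as $N \to \infty$.

```python
import numpy as np
from scipy.special import gammaln

def log_prob(counts, q):
    """log of N!/(n_1! ... n_m!) * prod_i q_i^{n_i}."""
    counts = np.asarray(counts, dtype=float)
    return (gammaln(counts.sum() + 1) - gammaln(counts + 1).sum()
            + np.sum(counts * np.log(q)))

def kl(p, q):
    return np.sum(p * np.log(p / q))

q = np.array([0.5, 0.3, 0.2])  # reference ("scattering") distribution
p = np.array([0.4, 0.4, 0.2])  # a particular generated distribution

for N in [10, 100, 1000, 10_000]:
    # the per-chunk log-probability tends to -KL(p || q)
    print(N, round(log_prob(p * N, q) / N, 4), round(-kl(p, q), 4))
```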
- At this point, we arrive at a very intuitive reformulation of the principle of maximum entropy: since $D_{\mathrm{KL}}(p \,\|\, u) = \log m - H(p)$ when $u$ is the uniform distribution on $m$ outcomes, the maximum entropy distribution is simply the distribution that minimizes its KL divergence from the uniform distribution.
- From this perspective, it’s obvious that the uniform distribution is the maximum entropy distribution when no constraints are present, since the distribution with minimum divergence from the uniform distribution is obviously the uniform distribution itself. If we do constrain the distribution to some set $\mathcal{C}$, we are finding the distribution “closest” to the uniform distribution that lies in the set $\mathcal{C}$.
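- Here’s a small sketch of that equivalence (Python/SciPy; the constraint, fixing the first outcome’s probability at $0.5$ on five outcomes, is an arbitrary choice of mine): maximizing the entropy and minimizing the KL divergence from the uniform distribution over the same set $\mathcal{C}$ give the same answer.

```python
import numpy as np
from scipy.optimize import minimize

# Constraint set C: distributions on 5 outcomes with the first
# probability fixed at 0.5. Both objectives should spread the
# remaining mass uniformly: [0.5, 0.125, 0.125, 0.125, 0.125].
m = 5
uniform = np.full(m, 1 / m)
constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},
    {"type": "eq", "fun": lambda p: p[0] - 0.5},
]
bounds = [(1e-9, 1)] * m

def neg_entropy(p):
    return np.sum(p * np.log(p))

def kl_from_uniform(p):
    return np.sum(p * np.log(p / uniform))

p_maxent = minimize(neg_entropy, uniform, bounds=bounds, constraints=constraints).x
p_minkl = minimize(kl_from_uniform, uniform, bounds=bounds, constraints=constraints).x
print(np.round(p_maxent, 3))
print(np.round(p_minkl, 3))
```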
- This raises the question, then: is there anything really fundamental about choosing the uniform distribution as the reference distribution when computing the KL divergence? I think there isn’t. Instead, the principle of maximum entropy works with any “prior” distribution $q$, and it happens that the uniform distribution is a good prior for many discrete problems.
- The uniform distribution becomes less useful for continuous problems, since it’s impossible to have a uniform distribution on all of $\mathbb{R}$. Indeed, the naive generalization of entropy, the differential entropy $h(p) = -\int p(x) \log p(x) \, dx$, loses some of the nice properties that Shannon entropy has. Instead, it seems that the proper generalization is $-\int p(x) \log \frac{p(x)}{q(x)} \, dx$ for some reference distribution $q$, which is the negative of the continuous KL divergence. In the continuous setting, the reference distribution must be specified for the entropy to be well-defined, suggesting that it must be specified for the discrete case as well, and it is just that we usually take the uniform measure out of convenience.
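- One well-known property that differential entropy loses, for example, is invariance under a change of variables, while the relative entropy keeps it. Here’s a sketch using the closed-form Gaussian expressions (the Gaussian example and the scale factor are my choices): rescaling $x \mapsto ax$ shifts the differential entropy by $\log a$ but leaves the KL divergence between two Gaussians unchanged.

```python
import numpy as np

def differential_entropy_gaussian(sigma):
    """h(N(mu, sigma^2)) = 0.5 * log(2 * pi * e * sigma^2)."""
    return 0.5 * np.log(2 * np.pi * np.e * sigma**2)

def kl_gaussian(mu1, s1, mu2, s2):
    """KL( N(mu1, s1^2) || N(mu2, s2^2) ), closed form."""
    return np.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

a = 10.0  # change of variables x -> a*x sends N(mu, s^2) to N(a*mu, (a*s)^2)

# differential entropy shifts by log(a) under the rescaling...
print(differential_entropy_gaussian(1.0), differential_entropy_gaussian(a))
# ...but the KL divergence between the two rescaled Gaussians is unchanged
print(kl_gaussian(0, 1, 1, 2), kl_gaussian(0, a, a, a * 2))
```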
- Bayesian inference
- The principle of maximum entropy and its formulation in terms of KL divergence prescribe a rule for going from an (often uniform) prior distribution to another distribution. Concretely, if we have a prior distribution $q$, but we learn that the set of possible distributions is actually constrained to $\mathcal{C}$, then the rule prescribes that the prior distribution should be replaced with the distribution $p$ that minimizes $D_{\mathrm{KL}}(p \,\|\, q)$ subject to $p \in \mathcal{C}$.
- Of course, this procedure is reminiscent of Bayesian inference, which also prescribes a rule for going from a prior distribution to a new distribution, in light of new, observed data. Concretely, a prior $q(\theta)$ should be replaced with the posterior $q(\theta \mid x')$ after observing new data $x'$. How are these two prescriptions related?
- For me, Giffin and Caticha (2007) provide one insightful answer: they show that Bayesian inference can be viewed precisely as an instance of the principle of maximum entropy. It imagines a prior joint distribution $q(\theta, x)$, which encodes both the prior $q(\theta)$ and the likelihood $q(x \mid \theta)$. When data $x'$ is observed, the constraint set $\mathcal{C}$ is restricted to only distributions that match the observed data, i.e. those joint distributions $p(\theta, x)$ whose marginal over the data is $p(x) = \delta_{x'}(x)$. As per the principle of maximum entropy (or, more precisely, minimum KL divergence), we then replace the prior joint distribution $q(\theta, x)$ with the distribution $p(\theta, x)$ that minimizes $D_{\mathrm{KL}}(p \,\|\, q)$ subject to $p \in \mathcal{C}$. We will show that we can recover the traditional posterior $q(\theta \mid x')$ from this maximum entropy distribution $p$.
- The key observation is that $D_{\mathrm{KL}}(p \,\|\, q) = \mathbb{E}_{p(x)}\!\left[ D_{\mathrm{KL}}(p(\theta \mid x) \,\|\, q(\theta \mid x)) \right] + D_{\mathrm{KL}}(p(x) \,\|\, q(x))$. Under the constraint that $p(x) = \delta_{x'}(x)$, the first term on the right-hand side becomes simply $D_{\mathrm{KL}}(p(\theta \mid x') \,\|\, q(\theta \mid x'))$, while the second term on the right-hand side becomes a constant. Minimizing this quantity is therefore achieved by ensuring that $D_{\mathrm{KL}}(p(\theta \mid x') \,\|\, q(\theta \mid x')) = 0$, which can be done while satisfying the constraint by setting $p(\theta, x) = \delta_{x'}(x) \, q(\theta \mid x')$. Therefore, the distribution prescribed by the principle of maximum entropy after observing data $x'$ is completely captured by the Bayesian posterior $q(\theta \mid x')$ (and the observed data $x'$).
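- As a sanity check, here’s a toy numerical version of this argument (Python/SciPy; the prior, likelihood, and observed value are made-up numbers of mine): minimizing $D_{\mathrm{KL}}(p \,\|\, q)$ over joint distributions whose data marginal is concentrated on the observed $x'$ recovers exactly the Bayes posterior $q(\theta \mid x')$.

```python
import numpy as np
from scipy.optimize import minimize

# Toy model: theta in {0, 1, 2}, x in {0, 1}. The joint q(theta, x)
# encodes the prior q(theta) and the likelihood q(x | theta).
q_theta = np.array([0.5, 0.3, 0.2])
q_x_given_theta = np.array([[0.9, 0.1],
                            [0.5, 0.5],
                            [0.2, 0.8]])
q_joint = q_theta[:, None] * q_x_given_theta

x_obs = 1  # the observed data x'

# Under the constraint p(x) = delta_{x'}, the joint p is determined by the
# conditional p(theta | x'), and KL(p || q) reduces to the sum below.
def kl_to_prior_joint(p_theta):
    return np.sum(p_theta * np.log(p_theta / q_joint[:, x_obs]))

res = minimize(kl_to_prior_joint, np.full(3, 1 / 3),
               bounds=[(1e-9, 1)] * 3,
               constraints=[{"type": "eq", "fun": lambda p: np.sum(p) - 1.0}])

bayes_posterior = q_joint[:, x_obs] / q_joint[:, x_obs].sum()
print(np.round(res.x, 4))            # minimum-KL solution
print(np.round(bayes_posterior, 4))  # Bayes posterior q(theta | x') -- they match
```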
- This gives an equivalence between the principle of maximum entropy and the conditioning procedure prescribed by Bayesian inference.