There’s a well-understood phenomenon in machine learning called overfitting. The idea is best shown by a graph:
Let me explain. The vertical axis represents the error of a hypothesis. The horizontal axis represents the complexity of the hypothesis. The blue curve represents the error of a machine learning algorithm’s output on its training data, and the red curve represents the generalization of that hypothesis to the real world (or, in the case of PAC-learning, the full target distribution). The overfitting phenomenon is marked in the middle of the graph, before which the training error and generalization error both go down, but after which the training error continues to fall while the generalization error rises.
The explanation is a sort of numerical version of Occam’s Razor that says more complex hypotheses can model a fixed data set better and better, but at some point a simpler hypothesis better models the underlying phenomenon that generates the data. To optimize a particular learning algorithm, one wants to set the parameters of the model to hit the minimum of the red curve.
This is where things get juicy. Boosting, which we covered in gruesome detail previously, has a natural measure of complexity: the number of rounds you run the algorithm for. Each round adds one additional “weak learner” weighted vote, so running for a thousand rounds gives a vote of a thousand weak learners. Despite this, boosting doesn’t overfit on many datasets. In fact, researchers observed something shocking: Boosting would hit zero training error, they kept running it for more rounds, and the generalization error kept going down! It seemed like the complexity could grow arbitrarily without penalty.
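If you want to see this phenomenon for yourself, here is a minimal sketch using scikit-learn (my choice of library, dataset, and parameters is arbitrary and just for illustration):

```python
# A quick experiment: watch boosting's test error keep falling even after
# training error flattens out. Dataset and parameters are arbitrary.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = AdaBoostClassifier(n_estimators=500).fit(X_train, y_train)

# staged_score reports accuracy after each round of boosting.
for rounds, (train_acc, test_acc) in enumerate(
        zip(model.staged_score(X_train, y_train),
            model.staged_score(X_test, y_test)), start=1):
    if rounds % 100 == 0:
        print(rounds, 1 - train_acc, 1 - test_acc)  # round, train error, test error
```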
Schapire, Freund, Bartlett, and Lee proposed a theoretical explanation for this based on the notion of a margin. Remember that the standard AdaBoost algorithm produces a set of weak hypotheses $h_t(x)$ and a corresponding weight $\alpha_t$ for each round $t = 1, \ldots, T$. The classifier at the end is a weighted majority vote of all the weak learners (roughly: weak learners with high error on “hard” data points get less weight).
Definition: The signed confidence of a labeled example $(x, y)$ is the weighted sum:

$$ \textup{conf}(x) = \sum_{t=1}^T \alpha_t h_t(x) $$

The margin of $(x, y)$ is the quantity $\textup{margin}(x, y) = y \cdot \textup{conf}(x)$. The notation implicitly depends on the outputs of the AdaBoost algorithm via “conf.”
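To make the definitions concrete, here is a small Python sketch (the weak learners and weights below are made-up toys, not the output of a real AdaBoost run):

```python
# Sketch of signed confidence and margin; weak learners and weights are toy examples.
def conf(x, weak_learners, alphas):
    """Signed confidence: the weighted sum of weak learner votes on x."""
    return sum(a * h(x) for h, a in zip(weak_learners, alphas))

def margin(x, y, weak_learners, alphas):
    """Margin of a labeled example (x, y); nonpositive iff the vote misclassifies x."""
    return y * conf(x, weak_learners, alphas)

# Toy weak learners: threshold "stumps" on a one-dimensional input.
hs = [lambda x: 1 if x > 0 else -1, lambda x: 1 if x > 2 else -1]
alphas = [0.7, 0.3]
print(margin(1.0, 1, hs, alphas))   #  0.7 - 0.3 = 0.4: correct, small margin
print(margin(1.0, -1, hs, alphas))  # -0.4: incorrect
```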
We use the product of the label and the confidence for the observation that $y \cdot \textup{conf}(x) \leq 0$ if and only if the classifier is incorrect. The theorem we’ll prove in this post is
Theorem: With high probability over a random choice of training data, for any $\theta > 0$ the generalization error of boosting is bounded from above by

$$ \Pr_S[\textup{margin}(x, y) \leq \theta] + O\left( \frac{1}{\sqrt{m}} \sqrt{\frac{\log m \log |H|}{\theta^2} + \log(1/\delta)} \right) $$

(Here $m$ is the number of training examples and $H$ is the weak learner’s hypothesis space; both will be defined properly below.)
In words, the generalization error of the boosting hypothesis is bounded by the distribution of margins observed on the training data. To state and prove the theorem more generally we have to return to the details of PAC-learning. Here and in the rest of this post, $\Pr_D$ denotes $\Pr_{(x, y) \sim D}$, the probability over a random example drawn from the distribution $D$, and $\Pr_S$ denotes the probability over a random (training) set of examples drawn from $D$.
Theorem: Let $S$ be a set of $m$ random examples chosen from the distribution $D$ generating the data. Assume the weak learner corresponds to a finite hypothesis space $H$ of size $|H|$, and let $\delta > 0$. Then with probability at least $1 - \delta$ (over the choice of $S$), every weighted-majority vote function $f$ satisfies the following generalization bound for every $\theta > 0$:

$$ \Pr_D[y f(x) \leq 0] \leq \Pr_S[y f(x) \leq \theta] + O\left( \frac{1}{\sqrt{m}} \sqrt{\frac{\log m \log |H|}{\theta^2} + \log(1/\delta)} \right) $$
In other words, this phenomenon is a fact about voting schemes, not boosting in particular. From now on, a “majority vote” function $f(x)$ will mean to take the sign of a sum of the form $\sum_{i=1}^N a_i h_i(x)$, where $a_i \geq 0$ and $\sum_i a_i = 1$. This is the “convex hull” of the set of weak learners $H$. If $H$ is infinite (in our proof it will be finite, but we’ll state a generalization afterward), then only finitely many of the $a_i$ in the sum may be nonzero.
To prove the theorem, we’ll start by defining a class of functions corresponding to “unweighted majority votes with duplicates:”
Definition: Let $C_N$ be the set of functions $g(x)$ of the form

$$ g(x) = \frac{1}{N} \sum_{i=1}^N h_i(x) $$

where $h_i \in H$ and the $h_i$ may contain duplicates (some of the $h_i$ may be equal to some other of the $h_j$).
Now every majority vote function $f$ can be written as a weighted sum of $h_i$ with weights $a_i$ (I’m using $a$ instead of $\alpha$ to distinguish arbitrary weights from those weights arising from Boosting). So any such $f$ defines a natural distribution over $H$ where you draw function $h_i$ with probability $a_i$. I’ll call this distribution $A_f$. If we draw from this distribution $N$ times and take an unweighted sum, we’ll get a function $g(x) \in C_N$. Call the random process (distribution) generating functions in this way $Q_f$. In diagram form, the logic goes

$$ f \to \textup{weights } a_i \to \textup{distribution } A_f \textup{ over } H \to \textup{function } g \in C_N \textup{ by drawing } N \textup{ times according to } A_f $$
The main fact about the relationship between $f$ and $Q_f$ is that each is completely determined by the other. Obviously $Q_f$ is determined by $f$ because we defined it that way, but $f$ is also completely determined by $Q_f$ as follows:

$$ f(x) = \mathbb{E}_{g \sim Q_f}[g(x)] $$

Proving the equality is an exercise for the reader.
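If you’d rather check the exercise numerically, here is a quick Monte Carlo sketch (toy weak learners and weights again) showing that averaging many draws $g \sim Q_f$ at a point recovers $f$ at that point:

```python
# Monte Carlo check that f(x) = E_{g ~ Q_f}[g(x)]; everything here is a toy example.
import random

hs = [lambda x: 1 if x > 0 else -1,
      lambda x: 1 if x > 2 else -1,
      lambda x: -1 if x > 1 else 1]
a = [0.5, 0.3, 0.2]  # nonnegative weights summing to 1

def f(x):
    return sum(w * h(x) for h, w in zip(hs, a))

def sample_g(N):
    """Draw N weak learners according to the weights and average them (a draw from Q_f)."""
    draws = random.choices(hs, weights=a, k=N)
    return lambda x: sum(h(x) for h in draws) / N

x = 1.5
estimate = sum(sample_g(N=10)(x) for _ in range(20000)) / 20000
print(f(x), estimate)  # the two values should be close
```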
Proof of Theorem. First we’ll split the probability $\Pr_D[y f(x) \leq 0]$ into two pieces, and then bound each piece.
First a probability reminder. If we have two events $A$ and $B$ (in what’s below, $A$ will be $y f(x) \leq 0$ and $B$ will be $y g(x) \leq \theta/2$), we can split up $\Pr[A]$ into $\Pr[A \textup{ and } B] + \Pr[A \textup{ and } \overline{B}]$ (where $\overline{B}$ is the opposite of $B$). This is called the law of total probability. Moreover, because $\Pr[A \textup{ and } B] = \Pr[A \mid B] \Pr[B]$ and because these quantities are all at most 1, it’s true that $\Pr[A \textup{ and } B] \leq \Pr[A \mid B]$ (the conditional probability) and that $\Pr[A \textup{ and } B] \leq \Pr[B]$.
Back to the proof. Notice that for any $g(x) \in C_N$ and any $\theta > 0$, we can write $\Pr_D[y f(x) \leq 0]$ as a sum:

$$ \Pr_D[y f(x) \leq 0] = \Pr_D[y g(x) \leq \theta/2 \textup{ and } y f(x) \leq 0] + \Pr_D[y g(x) > \theta/2 \textup{ and } y f(x) \leq 0] $$

Now I’ll loosen the first term by removing the second event (that only makes the whole probability bigger) and loosen the second term by relaxing it to a conditional:

$$ \Pr_D[y f(x) \leq 0] \leq \Pr_D[y g(x) \leq \theta/2] + \Pr_D[y g(x) > \theta/2 \mid y f(x) \leq 0] $$

Now because the inequality is true for every $g(x) \in C_N$, it’s also true if we take an expectation of the RHS over any distribution we choose. We’ll choose the distribution $Q_f$ to get

$$ \Pr_D[y f(x) \leq 0] \leq \textup{(term 1)} + \textup{(term 2)} $$

And (term 1) is

$$ \textup{(term 1)} = \mathbb{E}_{g \sim Q_f}\left[ \Pr_D[y g(x) \leq \theta/2] \right] = \Pr_{x \sim D, \ g \sim Q_f}[y g(x) \leq \theta/2] $$

And (term 2) is

$$ \textup{(term 2)} = \mathbb{E}_{g \sim Q_f}\left[ \Pr_D[y g(x) > \theta/2 \mid y f(x) \leq 0] \right] = \Pr_{x \sim D, \ g \sim Q_f}[y g(x) > \theta/2 \mid y f(x) \leq 0] $$
We can rewrite the probabilities using expectations because (1) the variables being drawn in the distributions are independent, and (2) the probability of an event is the expectation of the indicator function of the event.
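To make point (2) concrete, here is (term 1) rewritten this way (just unpacking definitions, using the independence from point (1) to split the expectations):

$$ \Pr_{x \sim D, \ g \sim Q_f}[y g(x) \leq \theta/2] = \mathbb{E}_{x \sim D}\left[ \mathbb{E}_{g \sim Q_f}\left[ \mathbf{1}_{\{y g(x) \leq \theta/2\}} \right] \right] $$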
Now we’ll bound the terms separately. We’ll start with (term 2).
Fix $(x, y)$ with $y f(x) \leq 0$ and look at the quantity inside the expectation of (term 2):

$$ \Pr_{g \sim Q_f}\left[ y g(x) > \theta/2 \right] $$

This should intuitively be very small for the following reason. We’re sampling $g$ according to a distribution whose expectation is $f$, and we know that $y f(x) \leq 0$. Of course $y g(x)$ is unlikely to be large.
Mathematically we can prove this by transforming the thing inside the probability to a form suitable for the Chernoff bound. Saying $y g(x) > \theta/2$ is the same as saying $|y g(x) - y f(x)| > \theta/2$ (recall $y f(x) \leq 0$ here), i.e. that some random variable which is a sum of independent random variables (the $y h_i(x)$) deviates from its expectation by at least $\theta/2$. Since the $y$’s are all $\pm 1$ and constant inside the expectation, they can be removed from the absolute value to get

$$ \Pr_{g \sim Q_f}\left[ |g(x) - f(x)| > \theta/2 \right] $$

The Chernoff bound allows us to bound this by an exponential in the number of random variables in the sum, i.e. $N$. It turns out the bound is $e^{-N \theta^2 / 8}$.
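If you don’t feel like doing the Chernoff computation by hand, here is a quick simulation of the one-sided version of this bound (all parameters are arbitrary choices for illustration):

```python
# Simulate the one-sided bound Pr[g(x) - f(x) > theta/2] <= e^{-N theta^2 / 8}.
import math
import random

N, theta, trials = 200, 0.5, 100000
p = 0.6                  # each vote is +1 with probability p
mean = 2 * p - 1         # so the expected value of a single vote is 2p - 1

exceed = 0
for _ in range(trials):
    g = sum(1 if random.random() < p else -1 for _ in range(N)) / N
    if g - mean > theta / 2:
        exceed += 1

print(exceed / trials)              # observed frequency of a large deviation
print(math.exp(-N * theta**2 / 8))  # the bound: e^{-N theta^2 / 8}, about 0.0019
```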
Now recall (term 1):

$$ \textup{(term 1)} = \Pr_{x \sim D, \ g \sim Q_f}[y g(x) \leq \theta/2] $$
For (term 1), we don’t want to bound it absolutely like we did for (term 2), because there is nothing stopping the classifier $f$ from being a bad classifier and having lots of error. Rather, we want to bound it in terms of the probability that $y f(x) \leq \theta$. We’ll do this in two steps. In step 1, we’ll go from $\Pr_D$ of the $g$’s to $\Pr_S$ of the $g$’s.
Step 1: For any fixed $g, \theta$, if we take a sample $S$ of size $m$, then consider the event in which the sample probability deviates from the true distribution by some value $\varepsilon_N$, i.e. the event

$$ \Pr_D[y g(x) \leq \theta/2] > \Pr_S[y g(x) \leq \theta/2] + \varepsilon_N $$

The claim is this happens with probability at most $e^{-2m\varepsilon_N^2}$. This is again the Chernoff bound in disguise, because the expected value of $\Pr_S[y g(x) \leq \theta/2]$ is $\Pr_D[y g(x) \leq \theta/2]$, and the probability over $S$ is an average of random variables (it’s a slightly different form of the Chernoff bound; see this post for more). From now on we’ll drop the sampling subscripts $x \sim D$ and $g \sim Q_f$ when writing $\Pr_D$ and $\Pr_S$.
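For reference, the version of the bound being used here (the standard additive Hoeffding/Chernoff bound, stated for completeness) is: if $X_1, \ldots, X_m$ are independent random variables taking values in $[0, 1]$ with mean $\mu$, then

$$ \Pr\left[ \mu - \frac{1}{m} \sum_{i=1}^m X_i > \varepsilon \right] \leq e^{-2m\varepsilon^2} $$

In our case $X_i = \mathbf{1}_{\{y_i g(x_i) \leq \theta/2\}}$ for the $i$-th sample point, so the average is exactly $\Pr_S[y g(x) \leq \theta/2]$ and $\mu$ is $\Pr_D[y g(x) \leq \theta/2]$.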
The bound above holds true for any fixed $g, \theta$, but we want a bound over all $g$ and $\theta$. To do that we use the union bound. Note that there are only $N+1$ possible choices for a nonnegative $\theta$ because $g(x)$ is a sum of $N$ values each of which is either $1$ or $-1$. And there are only $|C_N| \leq |H|^N$ possibilities for $g$. So the union bound says the above event will occur with probability at most $(N+1)|H|^N e^{-2m\varepsilon_N^2}$.
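To spell out the counting of $\theta$’s: since each $h_i(x) \in \{-1, 1\}$, any $g \in C_N$ takes values in the set

$$ \left\{ -1, \ -1 + \frac{2}{N}, \ \ldots, \ 1 - \frac{2}{N}, \ 1 \right\} $$

of size $N+1$, so only $N+1$ settings of the threshold $\theta/2$ produce distinct events $y g(x) \leq \theta/2$.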
If we want the event to occur with probability at most $\delta_N$, we can judiciously pick

$$ \varepsilon_N = \sqrt{\frac{1}{2m} \log \frac{(N+1)|H|^N}{\delta_N}} $$
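This choice isn’t magic; it’s just the result of setting the union bound’s failure probability equal to $\delta_N$ and solving for $\varepsilon_N$:

$$ (N+1)|H|^N e^{-2m\varepsilon_N^2} = \delta_N \iff \varepsilon_N^2 = \frac{1}{2m} \log \frac{(N+1)|H|^N}{\delta_N} $$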
And since the bound holds in general, we can take expectation with respect to $g \sim Q_f$ and nothing changes. This means that for any $\delta_N$, our chosen $\varepsilon_N$ ensures that the following is true with probability at least $1 - \delta_N$:

$$ \Pr_{D}[y g(x) \leq \theta/2] \leq \Pr_{S}[y g(x) \leq \theta/2] + \varepsilon_N $$
Now for step 2, we relate the probability that $y g(x) \leq \theta/2$ on a sample to the probability that $y f(x) \leq \theta$ on a sample.
Step 2: The first claim is that

$$ \Pr_S[y g(x) \leq \theta/2] \leq \Pr_S[y f(x) \leq \theta] + \Pr_S[y g(x) \leq \theta/2 \mid y f(x) > \theta] $$

What we did was break up the LHS into two “and”s, when $y f(x) \leq \theta$ and when $y f(x) > \theta$ (this was still an equality). Then we loosened the first term to $\Pr_S[y f(x) \leq \theta]$ since that is only more likely than both $y g(x) \leq \theta/2$ and $y f(x) \leq \theta$. Then we loosened the second term again using the fact that a probability of an “and” is bounded by the conditional probability.
Now we have the probability of $y g(x) \leq \theta/2$ bounded by the probability that $y f(x) \leq \theta$ plus some stuff. We just need to bound the “plus some stuff” absolutely and then we’ll be done. The argument is the same as our previous use of the Chernoff bound: we assume $y f(x) > \theta$, and yet $y g(x) \leq \theta/2$. So the deviation of $y g(x)$ from its expectation is large, and the probability that happens is exponentially small in the amount of deviation. The bound you get is

$$ \Pr_S[y g(x) \leq \theta/2 \mid y f(x) > \theta] \leq e^{-N \theta^2 / 8} $$
And again we use the union bound to ensure the failure of this bound for any $N$ will be very small. Specifically, if we want the total failure probability to be at most $\delta$, then we need to pick some $\delta_N$’s so that $\sum_{N=1}^\infty \delta_N = \delta$. Choosing $\delta_N = \frac{\delta}{N(N+1)}$ works.
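The sum telescopes, which is why this choice works:

$$ \sum_{N=1}^{\infty} \frac{\delta}{N(N+1)} = \delta \sum_{N=1}^{\infty} \left( \frac{1}{N} - \frac{1}{N+1} \right) = \delta $$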
Putting everything together, we get that with probability at least $1 - \delta$, for every $N$ and every $\theta > 0$, this bound on the failure probability of $f(x)$ holds:

$$ \Pr_D[y f(x) \leq 0] \leq \Pr_S[y f(x) \leq \theta] + 2e^{-N \theta^2 / 8} + \sqrt{\frac{1}{2m} \log \left( \frac{N(N+1)^2 |H|^N}{\delta} \right)} $$

This claim is true for every $N$, so we can pick $N$ that minimizes it. Doing a little bit of behind-the-scenes calculus that is left as an exercise to the reader, a tight choice of $N$ is $N = \frac{4}{\theta^2} \log \left( \frac{m}{\log |H|} \right)$. And this gives the statement of the theorem.
We proved this for finite hypothesis classes, and if you know what VC-dimension is, you’ll know that it’s a central tool for reasoning about the complexity of infinite hypothesis classes. An analogous theorem can be proved in terms of the VC dimension. In that case, calling $d$ the VC-dimension of the weak learner’s output hypothesis class, the bound is

$$ \Pr_D[y f(x) \leq 0] \leq \Pr_S[y f(x) \leq \theta] + O\left( \frac{1}{\sqrt{m}} \sqrt{\frac{d \log^2(m/d)}{\theta^2} + \log(1/\delta)} \right) $$
How can we interpret these bounds with so many parameters floating around? That’s where asymptotic notation comes in handy. If we fix $\delta$ and $\theta$, then the big-O part of the theorem simplifies to $\sqrt{\frac{\log m \log |H|}{m}}$, which is easier to think about since $\frac{\log m}{m}$ goes to zero very fast.
Now the theorem we just proved was about any weighted majority function. The question still remains: why is AdaBoost good? That follows from another theorem, which we’ll state and leave as an exercise (it essentially follows by unwrapping the definition of the AdaBoost algorithm from last time).
Theorem: Suppose that during AdaBoost the weak learners produce hypotheses with training errors $\varepsilon_1, \ldots, \varepsilon_T$. Then for any $\theta$,

$$ \Pr_S[y f(x) \leq \theta] \leq 2^T \prod_{t=1}^T \sqrt{\varepsilon_t^{1 - \theta} (1 - \varepsilon_t)^{1 + \theta}} $$
Let’s interpret this for some concrete numbers. Say that $\theta = 0$ and each $\varepsilon_t = \varepsilon$ is any fixed value less than $1/2$. In this case the term inside the product becomes $\sqrt{\varepsilon(1 - \varepsilon)} < 1/2$, so folding in the $2^T$ the whole bound is $\left( 2\sqrt{\varepsilon(1 - \varepsilon)} \right)^T$, which tends exponentially quickly to zero in the number of rounds $T$. On the other hand, if we raise $\theta$ to about $1/3$, then in order to maintain the LHS tending to zero we would need $\varepsilon < \frac{1}{4}(3 - \sqrt{5})$, which is about 20% error.
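Here is a throwaway computation checking these numbers (assuming for simplicity a constant weak-learner error $\varepsilon$ across rounds):

```python
# Evaluate the per-round factor 2 * sqrt(eps^(1-theta) * (1-eps)^(1+theta));
# the bound is this factor to the T-th power, so it tends to zero iff the factor is < 1.
import math

def per_round_factor(eps, theta):
    return 2 * math.sqrt(eps ** (1 - theta) * (1 - eps) ** (1 + theta))

print(per_round_factor(0.4, 0))     # ~0.98 < 1: at theta = 0, any eps < 1/2 works
print(per_round_factor(0.25, 1/3))  # ~1.04 > 1: at theta = 1/3, 25% error is too much
print(per_round_factor((3 - math.sqrt(5)) / 4, 1/3))  # 1.0: the breakeven, about 19% error
```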
If you’re interested in learning more about Boosting, there is an excellent book by Freund and Schapire (the inventors of boosting) called Boosting: Foundations and Algorithms. There they include a tighter analysis based on the idea of Rademacher complexity. The bound I presented in this post is nice because the proof doesn’t require any machinery past basic probability, but if you want to reach the cutting edge of knowledge about boosting you need to invest in the technical stuff.
Until next time!