Saturday, January 10, 2009

F.A.E.W. Reviews: C4.5


For your convenience and mine, today I will review the popular C4.5 classification algorithm. I'll proceed in three sections: general history and description of the algorithm, issues associated with implementation, and, finally, its relative strengths and weaknesses. As always, references are hyperlinked to their respective sources.

[1] A General History and Overview of C4.5
According to its Wikipedia article, C4.5 was created by machine learning theorist Ross Quinlan. In his 1993 book, "C4.5: Programs for Machine Learning," Quinlan explains that the work was inspired by a graduate course he attended, taught by Donald Michie, then at Stanford University. He states, "[I was given] a challenging assignment--to learn a complete and correct rule for deciding whether one side in a simple chess endgame was lost two-ply." The system he developed eventually evolved into the ID3 algorithm (Iterative Dichotomiser 3), which would, in turn, evolve into C4.5.

C4.5--along with ID3, the Concept Learning System (CLS), and Classification and Regression Trees (CART)--belongs to a group of data mining algorithms that construct classifiers expressed as decision trees. Briefly, a decision tree is a way of breaking a large, heterogeneous data set into many smaller, more homogeneous groupings. Thus, each "leaf" on the tree represents a classification grouping, while each "branch" represents a feature conjunction that leads to that classification.

[2] Implementation of C4.5
Wu et al. (2008) have a reasonably succinct description of the C4.5 workflow. By their account, C4.5 grows an initial tree from a set S of cases (each case s_i being a feature vector whose kth element is a representation of the kth attribute) using a divide-and-conquer algorithm in the following steps (a rough sketch in code follows the list):
  1. If all the cases in S belong to the same class, or if S is small, label the tree leaf with the most prevalent class in S.
  2. If the above conditions are not met, choose a test based on a single attribute represented in S, one with two or more outcomes.
  3. The test chosen in step 2 will comprise the root of the tree, with each outcome assigned a separate branch from the root.
  4. Apply steps 2 and 3 recursively to each subset.
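To make the recursion concrete, here is a minimal sketch of that divide-and-conquer loop in Python. The case representation (a list of (feature-dict, label) pairs), the MIN_CASES threshold, the nested-dict tree, and the choose_test hook are all placeholders of my own, not Quinlan's actual data structures; the criterion that choose_test should implement is the subject of the next paragraph.

```python
from collections import Counter

MIN_CASES = 4  # hypothetical threshold for deciding that "S is small"

def grow_tree(cases, choose_test):
    """cases: list of (features, label) pairs, where features is a dict.
    choose_test: callable that returns (attribute, partition_fn) or None."""
    if not cases:                          # defensive: empty subset
        return {"leaf": None}
    labels = [label for _, label in cases]
    majority = Counter(labels).most_common(1)[0][0]

    # Step 1: stop if every case has the same class or the set is small,
    # and label the leaf with the most prevalent class.
    if len(set(labels)) == 1 or len(cases) <= MIN_CASES:
        return {"leaf": majority}

    # Step 2: choose a test on a single attribute with two or more outcomes.
    test = choose_test(cases)
    if test is None:                       # no informative test was found
        return {"leaf": majority}
    attribute, partition = test

    # Steps 3 and 4: the test becomes the root of this subtree, with one
    # branch per outcome; recurse on each outcome's subset of cases.
    branches = {outcome: grow_tree(subset, choose_test)
                for outcome, subset in partition(cases).items()}
    return {"test": attribute, "branches": branches, "default": majority}
```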
With a large S, there tend to be a great many possible tests that could be used to create partitions, leading to one of the primary implementation decisions a user must make: what is the criterion by which to choose the best possible test? An intuitive first pass at this question leads to the conclusion that the best test will, at any point, create the largest maximally homogeneous subsets S_i, but how do we quantify this? Most people turn to information entropy, which is calculated as

H(X) = E[I(X)] = -\sum_{x} p(x) \log_b p(x)

where b denotes the base of the logarithm (generally 2), I(X) denotes the information content of the random variable X, and p denotes its probability mass function. In words, H(X) is a measure of the uncertainty associated with the random variable X before we have selected a particular value for it, as well as of the information produced when we assign a specific value to X (Li & Vitanyi, 1997). Li & Vitanyi continue,
By choosing a particular message a from S, we remove the entropy from X by the assignment X:=a and produce or transmit information I = log D(S) by our selection of a.
Within the context of C4.5, information entropy is used to determine the best test to perform at a given node in the decision tree--nominally, the one that minimizes the total entropy of the resulting subsets S_i. Because that criterion by itself tends to favor tests with numerous outcomes, C4.5 normalizes it and selects the test with the greatest gain ratio (Wu et al., 2008).
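To illustrate, the following sketch computes the entropy of a labeled set and the gain ratio of a candidate categorical test, using the same hypothetical (feature-dict, label) representation as above; Quinlan's own C implementation organizes the computation differently.

```python
import math
from collections import Counter, defaultdict

def entropy(cases, base=2):
    """H(S) = -sum_c p(c) log_b p(c), over the class labels present in S."""
    counts = Counter(label for _, label in cases)
    total = len(cases)
    return -sum((n / total) * math.log(n / total, base) for n in counts.values())

def gain_ratio(cases, attribute):
    """Information gain of splitting on `attribute`, normalized by split info."""
    subsets = defaultdict(list)
    for features, label in cases:
        subsets[features[attribute]].append((features, label))

    total = len(cases)
    remainder = sum(len(s) / total * entropy(s) for s in subsets.values())
    gain = entropy(cases) - remainder

    # The split information term penalizes tests with many outcomes.
    split_info = -sum((len(s) / total) * math.log(len(s) / total, 2)
                      for s in subsets.values())
    return gain / split_info if split_info > 0 else 0.0
```

Plugging a loop over the available attributes that keeps the one with the highest gain_ratio into the choose_test hook above would complete the picture.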
The next decision one must make when implementing the C4.5 algorithm deals with how best to "prune" a decision tree once it has been constructed. The act of pruning a decision tree gets at a more general problem in the field of mathematical modeling--the avoidance of overfitted models. In general, when someone uses a data set to create a classification model, it is with the intention of applying it to new, previously unseen data. There is uncertainty associated with the picture of the feature space we created with our classification tree--we assume that our data was selected from a larger population of interest, and that the feature values represented in the set we've sampled are but a subset of the possible feature values. It is important that we account for this uncertainty, and not assume that our picture is a precise representation of the data population as a whole.

"Pruning," to a certain extent, helps extend a tree to new data. A variety of pruning techniques exist (see Esposito et al., 1997, for an interesting comparative analysis); the method emplyed in Quinlan's out-of-the-box method is called "Error-Based Pruning" (EBP). Beginning at the bottom of the tree, the EBP algorithm visits each node. According to certain decision criteria, EBP will simplify decision trees by either removing branches, or by grafting on new ones. The Esposito paper includes a helpful illustration. Sau, for example, that we have an unpruned tree T that we wish to simplify:

We can simplify T to T' by pruning at node 1:

We can simplify further by grafting the subtree removed in the step from T to T' in place of node 1:
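In the nested-dict representation used in the earlier sketch, the pruning and grafting steps just described might look something like the following toy example (the tree shape, the node numbering, and the class labels are invented purely for illustration):

```python
# A toy unpruned tree T: the root tests attribute "a", and its "y" branch
# leads to an internal node (call it node 1) that tests attribute "b".
node1 = {"test": "b",
         "branches": {"y": {"leaf": "pos"},
                      "n": {"test": "c",
                            "branches": {"y": {"leaf": "pos"},
                                         "n": {"leaf": "neg"}},
                            "default": "neg"}},
         "default": "pos"}
T = {"test": "a",
     "branches": {"y": node1, "n": {"leaf": "neg"}},
     "default": "neg"}

# Pruning at node 1 (T -> T'): the whole subtree rooted at node 1 is replaced
# by a leaf labeled with that node's majority class.
T_pruned = dict(T, branches=dict(T["branches"], y={"leaf": node1["default"]}))

# Grafting: instead of discarding node 1's subtrees entirely, one of them
# (here, its "n" branch) is promoted into node 1's place.
T_grafted = dict(T, branches=dict(T["branches"], y=node1["branches"]["n"]))
```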

To make these prune/graft/do-nothing decisions, C4.5 estimates the posterior probability of misclassification under each candidate operation on the constructed tree. To do this, it makes a couple of assumptions (a rough sketch of the resulting error estimate follows the list):
  1. Errors in the training set follow a binomial distribution.
  2. The set of examples contained in a leaf t is a statistical sample from which we can estimate confidence intervals for our error rate.
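To give a flavor of how these assumptions are put to work, here is a sketch of the pessimistic error estimate as it is commonly described in textbook accounts of C4.5's error-based pruning: the observed error rate at a leaf is replaced by the upper limit of a binomial confidence interval, at a default confidence level of 25%. The function name and the use of a normal approximation are my simplifications, not Quinlan's exact code.

```python
import math
from statistics import NormalDist

def pessimistic_error(errors, n, confidence=0.25):
    """Upper confidence limit on a leaf's true error rate, treating the
    `errors` misclassifications among its `n` training cases as a binomial
    sample (assumption 1) drawn from a wider population (assumption 2)."""
    if n == 0:
        return 1.0
    f = errors / n                              # observed error rate at the leaf
    z = NormalDist().inv_cdf(1 - confidence)    # one-sided z for the chosen CF
    numerator = (f + z * z / (2 * n)
                 + z * math.sqrt(f / n - f * f / n + z * z / (4 * n * n)))
    return numerator / (1 + z * z / n)

# Pruning decision, roughly: replace a subtree with a leaf (or graft a branch)
# whenever the replacement's estimated errors, n * pessimistic_error(...), are
# no greater than the sum of the estimates over the subtree's current leaves.
```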
[3] Strengths and Weaknesses
As with most classification algorithms, before deciding to use C4.5, it's important to reflect on whether the structure of your data lends itself to representation within its framework. For example, classifying data with respect to a categorical dependent variable, when you have an associated set of several categorical independent variables, is a situation in which C4.5 can be intuitively applied. That said, decision trees are certainly able to work with continuous data, but in these situations the user would traditionally need to decide the best way to bin the data into categories (e.g., low/medium/high). One of the ways in which the C4.5 algorithm improved upon ID3 was that it created thresholding schemes for making these binning decisions on its own.
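As an illustration of the idea, and not of Quinlan's exact procedure (which adds further corrections to the choice), a binary split on a continuous attribute can be found by sorting the observed values and scoring the information gain of a cut point between each pair of adjacent values. The sketch below reuses the hypothetical entropy helper and (feature-dict, label) cases from the earlier snippets:

```python
def best_threshold(cases, attribute):
    """Find the cut point t maximizing information gain for the binary test
    `attribute <= t` on a continuous attribute (illustrative only)."""
    values = sorted({features[attribute] for features, _ in cases})
    base_entropy, total = entropy(cases), len(cases)
    best_t, best_gain = None, 0.0

    for low, high in zip(values, values[1:]):
        t = (low + high) / 2.0            # candidate cut between adjacent values
        left = [c for c in cases if c[0][attribute] <= t]
        right = [c for c in cases if c[0][attribute] > t]
        gain = base_entropy - (len(left) / total * entropy(left)
                               + len(right) / total * entropy(right))
        if gain > best_gain:
            best_t, best_gain = t, gain

    return best_t, best_gain              # (None, 0.0) if no useful split exists
```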

In my experience, one of the primary reasons people use the C4.5 algorithm--or decision trees in general--is that it generates a set of easy-to-understand rules that ultimately lead to its classification decisions. This strength has won it a special place in the hearts of medical practitioners (e.g., Porcel et al., 2008; Huang et al., 2008) and their informaticians, as doctors often have an inherent distrust of so-called "black box" classification schemes (for a very interesting discussion of this, see Rolfe, 1997). But this does not come without a price--decision trees can be computationally expensive to train. When the tree is being built, at each node the algorithm must evaluate and sort the candidate splits in terms of the decision criterion before the best one can be found.

The pruning algorithm faces similar computational costs, as each candidate sub-tree must be formed and compared. More problematic--at least from a theoretical point of view--are the assumptions made in the pruning step. The likelihood that the errors made on our training data are binomially distributed is fairly slim, calling into question our confidence interval estimates. Worse still is the idea that the examples at node t constitute a random sample from the wider population, since the tree we constructed is not itself randomly selected from the population of all possible trees--we intentionally built it according to the characteristics of our data.

Despite these limitations, C4.5 belongs to one of the most widely-used and easy-to-understand categories of data mining algorithms. If you'd like to try it out, it's free, so give it a shot.
