Pruning (decision trees)

Pruning is a technique in machine learning that reduces the size of decision trees by removing sections of the tree that provide little power to classify instances. Pruning reduces the complexity of the final classifier, and hence improves predictive accuracy by reducing overfitting.


Introduction

One of the questions that arises in a decision tree algorithm is the optimal size of the final tree. A tree that is too large risks overfitting the training data and generalizing poorly to new samples. A small tree might not capture important structural information about the sample space. However, it is hard to tell when a tree algorithm should stop, because it is impossible to know whether the addition of a single extra node will dramatically decrease error. This problem is known as the horizon effect. A common strategy is to grow the tree until each node contains a small number of instances, then use pruning to remove nodes that do not provide additional information.

Pruning should reduce the size of a learning tree without reducing predictive accuracy as measured by a cross-validation set. There are many techniques for tree pruning that differ in the measurement that is used to optimize performance.
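
To make this trade-off concrete, here is a minimal sketch assuming scikit-learn is available; the dataset, the split, and the ccp_alpha value of 0.01 are arbitrary choices for illustration. It grows one unpruned tree and one pruned with scikit-learn's built-in cost-complexity pruning (described under Techniques below), then compares accuracy on held-out data; the unpruned tree typically fits the training set perfectly but does no better, and often worse, on the test set:

    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Grow the tree out fully, then grow a pruned variant for comparison.
    full = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
    pruned = DecisionTreeClassifier(ccp_alpha=0.01,  # illustrative value
                                    random_state=0).fit(X_train, y_train)

    for name, model in [("full", full), ("pruned", pruned)]:
        print(name, "train:", model.score(X_train, y_train),
              "test:", model.score(X_test, y_test))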


Techniques

Pruning can occur in a top-down or bottom-up fashion. Top-down pruning traverses nodes and trims subtrees starting at the root, while bottom-up pruning starts at the leaf nodes. Below are two popular pruning algorithms.

Reduced error pruning

One of the simplest forms of pruning is reduced error pruning. Starting at the leaves, each node is replaced with its most popular class. If the prediction accuracy, measured on a held-out validation set, is not affected, the change is kept. While somewhat naive, reduced error pruning has the advantage of simplicity and speed.
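
A minimal sketch of the procedure in Python follows. The Node class and helper functions are illustrative assumptions rather than any library's API; the tree is assumed binary, every node records the training labels that reached it, and pruning decisions are checked against a validation set:

    from collections import Counter

    class Node:
        """Illustrative tree node: internal nodes carry a split, and every
        node keeps the training labels that reached it so its majority
        class is available if it is collapsed into a leaf."""
        def __init__(self, labels, feature=None, threshold=None,
                     left=None, right=None):
            self.labels = labels        # training labels seen at this node
            self.feature = feature      # index of the feature tested here
            self.threshold = threshold  # go left if x[feature] <= threshold
            self.left = left
            self.right = right

        def is_leaf(self):
            return self.left is None

        def majority_class(self):
            return Counter(self.labels).most_common(1)[0][0]

    def predict(root, x):
        node = root
        while not node.is_leaf():
            node = node.left if x[node.feature] <= node.threshold else node.right
        return node.majority_class()

    def accuracy(root, X, y):
        return sum(predict(root, xi) == yi for xi, yi in zip(X, y)) / len(y)

    def reduced_error_prune(root, node, X_val, y_val):
        """Bottom-up pass: prune the children first, then try replacing
        this node with a leaf labelled with its majority class."""
        if node.is_leaf():
            return
        reduced_error_prune(root, node.left, X_val, y_val)
        reduced_error_prune(root, node.right, X_val, y_val)
        before = accuracy(root, X_val, y_val)
        saved = (node.left, node.right)
        node.left = node.right = None        # tentatively collapse to a leaf
        if accuracy(root, X_val, y_val) < before:
            node.left, node.right = saved    # accuracy dropped: undo

Because each collapse is kept only when validation accuracy does not drop, a single bottom-up pass never increases the measured error, which is what makes the method fast despite its simplicity.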

Cost complexity pruning

Cost complexity pruning generates a series of trees $T_0, \dots, T_m$, where $T_0$ is the initial tree and $T_m$ is the root alone. At step $i$, the tree is created by removing a subtree from tree $T_{i-1}$ and replacing it with a leaf node whose value is chosen as in the tree-building algorithm. The subtree to remove is chosen as follows. Define the error rate of tree $T$ over data set $S$ as $\operatorname{err}(T, S)$. The subtree $t$ that minimizes

$$\frac{\operatorname{err}(\operatorname{prune}(T, t), S) - \operatorname{err}(T, S)}{\left\vert \operatorname{leaves}(T) \right\vert - \left\vert \operatorname{leaves}(\operatorname{prune}(T, t)) \right\vert}$$

is chosen for removal. The function $\operatorname{prune}(T, t)$ denotes the tree obtained by removing the subtree $t$ from the tree $T$. Once the series of trees has been created, the best tree is chosen by generalized accuracy as measured by a hold-out set or cross-validation.
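
scikit-learn implements this idea as minimal cost-complexity pruning. The sketch below is a minimal illustration, assuming scikit-learn is installed; the dataset and the cv=5 setting are arbitrary choices. The cost_complexity_pruning_path method returns one effective alpha per tree in the nested series $T_0, \dots, T_m$, and cross-validation picks among them:

    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import cross_val_score, train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Each alpha in the path corresponds to one tree in the nested series:
    # alpha = 0 is the full tree, the largest alpha leaves only the root.
    path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(
        X_train, y_train)

    best_alpha, best_score = 0.0, -1.0
    for alpha in path.ccp_alphas:
        score = cross_val_score(
            DecisionTreeClassifier(ccp_alpha=alpha, random_state=0),
            X_train, y_train, cv=5).mean()
        if score > best_score:
            best_alpha, best_score = alpha, score

    # Refit the chosen tree and check generalization on held-out data.
    tree = DecisionTreeClassifier(ccp_alpha=best_alpha, random_state=0)
    tree.fit(X_train, y_train)
    print("alpha:", best_alpha, "test accuracy:", tree.score(X_test, y_test))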


See also

  • Alpha-beta pruning
  • Artificial neural network
  • Null-move heuristic



Further reading

  • MDL based decision tree pruning
  • Decision tree pruning using backpropagation neural networks



External links

  • Fast, Bottom-Up Decision Tree Pruning Algorithm
  • Introduction to Decision tree pruning

Source: Wikipedia