Trees can be useful analogies for many things: they can evoke the wild growth of a subject or serve as a metaphor for primordial nature. In our course, though, they stand for something perhaps more mundane, yet in some sense more useful.
We make thousands of decisions every day - which food to buy, what clothes to wear, whom to talk to and what to say, how to work, what work to do, and even what to think about right now. In fact, this is such an essential part of our lives that an increasingly prominent branch of mathematics, Decision Theory, was established to study it. And no matter what kind of decision you make, however complicated, it ultimately comes down to weighing a set of factors.
Some choices we make only rarely, but when we face the same choice again and again, it pays to build an efficient decision-making process. If you think about it, Classification in Machine Learning is exactly such a process. Over and over, the algorithm is asked whether an image shows a cat or a dog, or whether a tree is diseased or healthy. The problem doesn’t change. The key to an efficient solution is a decision-making process that highlights the most important factors, once we understand what effect each of them has on the outcome.
This is just what a Decision Tree does. In this module, we will try to understand how this is done, and how you can use it for Machine Learning.
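As a first taste, the core idea of picking the most informative factor can be sketched in a few lines of Python. This is only a toy illustration with hypothetical data (the factor names and dataset are made up for the example), not the module's actual code:

```python
# Toy sketch: find the single factor that best predicts a yes/no outcome,
# using entropy (a measure of "mixed-ness") to score each candidate split.
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels: 0 if pure, 1 if a 50/50 mix."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def best_factor(rows, labels):
    """Return the factor whose split reduces entropy the most (highest information gain)."""
    base = entropy(labels)
    def gain(factor):
        groups = {}
        for row, label in zip(rows, labels):
            groups.setdefault(row[factor], []).append(label)
        weighted = sum(len(g) / len(labels) * entropy(g) for g in groups.values())
        return base - weighted
    return max(rows[0], key=gain)

# Hypothetical observations of trees: which factor best predicts disease?
rows = [
    {"spots": "yes", "height": "tall"},
    {"spots": "yes", "height": "short"},
    {"spots": "no",  "height": "tall"},
    {"spots": "no",  "height": "short"},
]
labels = ["diseased", "diseased", "healthy", "healthy"]
print(best_factor(rows, labels))  # → spots ("spots" perfectly separates the classes)
```

Repeating this "pick the best factor, split, and recurse" step on each resulting group is, in essence, how a Decision Tree is grown.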