3️⃣

5.3 Entropy and Information Gain

Each node has what’s called an Entropy. Entropy measures how random (mixed) the data that passes through that particular node is. For example, a node would have zero entropy if conducting its test makes the output class 100% clear, without any further testing. However, if data yielding a particular value for the test (think “Rectangular Nose” for a Nose Shape test) is still just about as likely to fit into any of the classes (it could be a cat or a dog), then the Entropy for that node is 1, the maximum for two classes. As you can see, Entropy acts like our cost function for Decision Trees: by minimizing entropy, we gain node purity and prediction accuracy. Great!
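If you’d like to see the idea in code, here is a minimal sketch in plain Python (not from this course’s codebase) that computes the entropy of a group of class labels:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (base 2) of a list of class labels.
    0 means the group is perfectly pure; 1 is the maximum for two classes."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

print(entropy(["cat", "cat", "cat", "cat"]))  # 0.0 -> all one class, zero entropy
print(entropy(["cat", "dog", "cat", "dog"]))  # 1.0 -> evenly mixed, maximum entropy
```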
There’s just one more important concept to define.
Information Gain
Information Gain refers to the reduction in Entropy after splitting the data by a particular test: the parent node’s entropy minus the weighted average entropy of the resulting child nodes. By splitting the data in the most efficient way, we end up with a neat decision-making process that produces quick and accurate predictions. A small sketch of this calculation follows below.
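Here is a hedged, illustrative sketch of that calculation, reusing the entropy function from the sketch above (the “Nose Shape” split is a hypothetical example, not real course data):

```python
def information_gain(parent_labels, child_groups):
    """Entropy of the parent group minus the weighted average entropy
    of the child groups produced by a split."""
    total = len(parent_labels)
    weighted_child_entropy = sum(
        (len(group) / total) * entropy(group) for group in child_groups
    )
    return entropy(parent_labels) - weighted_child_entropy

# Splitting a mixed group by a hypothetical "Nose Shape" test:
parent = ["cat", "cat", "dog", "dog"]
split = [["cat", "cat"], ["dog", "dog"]]   # e.g. rectangular vs. triangular noses
print(information_gain(parent, split))     # 1.0 -> this test separates the classes perfectly
```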
Now, we’re ready to discuss the algorithm, so let’s jump into it!

⚖️
Copyright © 2021 Code 4 Tomorrow. All rights reserved. The code in this course is licensed under the MIT License. If you would like to use content from any of our courses, you must obtain our explicit written permission and provide credit. Please contact classes@code4tomorrow.org for inquiries.