Machine learning is about finding patterns in data. When we first get the data, it is organized into features and examples. What are those?
An example, or observation, is one complete set of attributes. For example - no pun intended - if we were a shopkeeper trying to predict the ideal price of a product to maximize profit, each example we feed our algorithm would be one previous shopkeeper with all of their attributes, along with their ideal price for the same product. A feature, on the other hand, is one particular attribute. In the scenario above, that might be the shopkeeper's neighborhood, the average neighborhood income, or the number of competitors.
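To make this concrete, here is a tiny made-up dataset for the shopkeeper scenario (the numbers and feature names are invented for illustration). Each row is one example; each column is one feature:

```python
import numpy as np

# Each row is one example (one shopkeeper); each column is one feature.
feature_names = ["avg_neighborhood_income", "num_competitors"]
X = np.array([
    [42_000, 3],   # shopkeeper 1
    [55_000, 1],   # shopkeeper 2
    [38_000, 5],   # shopkeeper 3
])

# The target: each shopkeeper's ideal price for the same product.
y = np.array([9.99, 12.49, 8.75])

print(X.shape)  # (3, 2): 3 examples, 2 features
```

This row-per-example, column-per-feature layout is the standard way datasets are handed to machine learning libraries.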
The cost function is the next step. It measures how inaccurate our model is at predicting the output. The exact cost function changes depending on which machine learning algorithm you use, but the principle is always the same: the worse the predictions, the higher the cost.
Before we get to the next and final step, we need to discuss parameters. Lots of terms in machine learning are also used outside it, but with different meanings, so be careful with that. The parameters are a set of numbers that determine the shape of the model. For example, in the linear equation y = mx + b, the values of m and b determine the shape of the graph. Similarly, machine learning models all have different parameters, which determine how important each feature is in determining the final output.
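Here is that idea in code: the same model, a straight line, behaves completely differently depending on the parameter values we plug in.

```python
def predict(x, m, b):
    """A linear model. m and b are the parameters:
    changing them changes which line we get."""
    return m * x + b

# Same input, different parameters, different predictions:
print(predict(2.0, m=3.0, b=1.0))  # 7.0
print(predict(2.0, m=0.5, b=1.0))  # 2.0
```

A "trained" model is just this same function with parameter values that happen to fit the data well.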
Optimization is the last important step. It finds the best curve for a particular dataset by adjusting the parameters until the cost function is at its minimum, which gives us the ideal weighting for each feature. There are many different optimization approaches, and we'll discuss a few soon. We'll be using libraries to handle the mathematics of optimization for us, but I would highly encourage you to explore optimization methods on your own - they're really cool.
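To show what a library is doing for us under the hood, here is a minimal sketch of one optimization approach, gradient descent, fitting the one-feature linear model from before. The data and learning rate are invented for illustration:

```python
import numpy as np

# Toy data generated from the relationship y = 2x + 1.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.0, 5.0, 7.0, 9.0])

m, b = 0.0, 0.0   # start with arbitrary parameter values
lr = 0.05         # learning rate: how big each adjustment is

for _ in range(2000):
    error = (m * x + b) - y
    # Nudge each parameter downhill along the gradient
    # of the mean-squared-error cost:
    m -= lr * 2 * np.mean(error * x)
    b -= lr * 2 * np.mean(error)

print(round(m, 2), round(b, 2))  # close to 2.0 and 1.0
```

Each loop iteration makes the cost a little smaller, and after enough iterations the parameters settle near the values that generated the data.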
Anyway, now that we have created the ideal model for the data, we can use it to predict the output values we need. Hooray!