What is Ensemble Learning? Video Lecture Transcript

This transcript was automatically generated, so there may be discrepancies between the video and the text.

Hi, everybody. Welcome back. In this video, we'll start learning about ensemble learning by talking about what exactly it is. Let me go ahead and share my Jupyter notebook, which is stored in the ensemble learning folder of the supervised learning lectures.

So ensemble learning is not a new type of supervised learning problem or a new general supervised learning algorithm. It's really more of an approach to solving the supervised learning problems we've already looked at, like regression and classification. At this point, we've looked at individual algorithms for solving each of those types of problems, like linear regression or k-nearest neighbors. The idea behind ensemble learning is that we want to use a handful, or ensemble, of these models simultaneously to make one model that may perform better than any of its individual component pieces. Sometimes this involves combining vastly different models; for example, we'll combine k-nearest neighbors with logistic regression, with support vector machines, with decision trees. Other times it involves making many copies of the same model with slight random perturbations; for example, a hundred or so decision trees will make up something known as a random forest. Ensemble learning in this vein can be used to solve regression problems as well as classification problems.

The idea we're leveraging here is something called the wisdom of the crowd. If you're unfamiliar with this concept, the idea is that you want to know the answer to some question. A nice example that a lot of people give is guessing the weight of a cow: I want to know how much a cow weighs. Instead of making a single guess, which may be very wrong, or just asking a single person what they think, you survey hundreds, thousands, or even millions of people, depending on the question you're trying to answer. Then, to get your final answer, you aggregate all of the individual answers you've collected. In a lot of cases, it turns out that the aggregated answer, which is essentially an average over lots of people, is more accurate than any single expert's answer. So someone who is an expert on cows may be outperformed by this average of thousands of laymen. A fun illustration of this idea, about the weighing of cows, is a nice NPR.org story.

So this is the idea of ensemble methods: we want to build a number of different algorithms, get their predictions on a data set, and then average these predictions in some way into a wiser prediction. In this section, we're going to cover a few different ways to do this, including random forests, bagging and pasting, and then boosting, which includes adaptive boosting and gradient boosting. I believe I also missed voting models, so let me add that in. ...

OK. So this was a nice introduction to the idea of ensemble learning. In the next video that you watch, hopefully we will be covering an actual ensemble learning example. I hope you learned what ensemble learning is all about and that you're excited to learn how to actually put it together. I enjoyed having you watch this video, and I hope you have a great rest of your day. Bye.
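
As a concrete sketch of the "combine several different models and aggregate their predictions" idea described above, here is a minimal example, assuming scikit-learn and a synthetic data set. The data set, model choices, and hyperparameters are illustrative only, not taken from the lecture's notebook:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic classification data standing in for whatever data the notebook uses
X, y = make_classification(n_samples=1000, n_features=10, random_state=216)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=216)

# Four vastly different base models, as in the lecture's example
base_models = [
    ("knn", KNeighborsClassifier(n_neighbors=5)),
    ("log_reg", LogisticRegression(max_iter=1000)),
    ("svm", SVC()),
    ("tree", DecisionTreeClassifier(max_depth=5)),
]

# Hard voting: the ensemble predicts the majority vote of its base models
ensemble = VotingClassifier(estimators=base_models, voting="hard")
ensemble.fit(X_train, y_train)

# Compare each individual model's test accuracy to the ensemble's
for name, model in base_models:
    model.fit(X_train, y_train)
    print(f"{name}: {model.score(X_test, y_test):.3f}")
print(f"ensemble: {ensemble.score(X_test, y_test):.3f}")
```

With hard voting, the aggregation step is simply a majority vote over the base models' predictions; the rest of this section covers more elaborate aggregation schemes like bagging, pasting, and boosting.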