Artificial intelligence has become one of the most popular trends in technology today. Across industries, AI is being used for everything from automating routine work to powering the software and services we interact with. Companies are incorporating AI into their products and services, and it seems like every day there is a new announcement about an AI model performing well on some benchmark!

While deep learning models may seem daunting at first, this article will go over the basics of ensemble methods for deep learning and then walk through some of their applications. By the end, you will be able to apply what you have learned to your own AI projects.

Deep neural networks (aka “neural nets”) are stacks of mathematical layers that are trained on large datasets to perform specific tasks. For instance, a very common neural net is the convolutional neural network (CNN), which takes images as input and produces predictions for those images as output.
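As a quick illustration, here is a minimal sketch of a small CNN classifier in PyTorch (my choice of framework; the layer sizes are illustrative, not anything specific to this article):

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # 3-channel image in
            nn.ReLU(),
            nn.MaxPool2d(2),                               # halve spatial size
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)  # for 32x32 inputs

    def forward(self, x):
        x = self.features(x)       # extract image features
        x = x.flatten(1)           # flatten everything but the batch dimension
        return self.classifier(x)  # raw class scores (logits)

logits = SmallCNN()(torch.randn(1, 3, 32, 32))  # one fake 32x32 RGB image
```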

Ensembling is the practice of combining many individual models to produce better results than any single model would alone. Crucially, each model learns the task independently, so their individual mistakes tend to differ, and those differing mistakes can cancel each other out when the predictions are combined.

By combining these independent models, overall performance can improve substantially thanks to the diversity of the components. In fact, ensembles are so reliably effective that some experts recommend them as a default choice when building AI systems.

Use ensemble methods

An ensemble method is any technique that combines multiple models to make predictions. The individual models will not always agree, but when they do, the agreement gives you a more confident prediction.

The most common way to use an ensemble is voting: each member of the ensemble votes for one option, and the majority wins. This is how many popular machine learning libraries implement ensembling out of the box (scikit-learn’s VotingClassifier, for example).

Other types of ensembles go beyond simply picking the option with the most votes, though. Some weight each option according to how likely each model is to be correct and select the option with the highest total score.
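To make both ideas concrete, here is a minimal sketch in NumPy; the three models’ votes and the weights are made-up numbers:

```python
import numpy as np

preds = np.array([0, 1, 1])        # class picked by each of 3 models
votes = np.bincount(preds)         # tally votes per class
majority_winner = votes.argmax()   # class 1 wins the hard vote, 2-to-1

# Weighted variant: score each option by how reliable each model is.
weights = np.array([0.6, 0.25, 0.15])  # e.g. from validation accuracy
scores = np.zeros(2)
for p, w in zip(preds, weights):
    scores[p] += w
# Class 0 scores 0.60, class 1 scores 0.40: the weighted vote flips
# to the single most reliable model's choice.
weighted_winner = scores.argmax()
```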

Ensembling different models can also help prevent overfitting. Because the members approach the task in different ways, the system is not relying on any one model’s memorized quirks of the training data. Combining diverse members makes the final prediction more robust to changes in the data.

Create a voting ensemble


A related way to combine models is to split up the data: instead of one model trained on all the data, you train many individual models, each on a smaller part of the dataset.

The parts are then put together into one final prediction. That way, even if one model performs badly on a given region of the data, the overall prediction will still be decent, because there is no longer a single point of failure making the decision.

A common example involves two classifiers that can each label an image as either cat or dog. One may identify it as a cat while the other says dog, and since they contradict each other, we need a way to choose. By combining their predictions (for example, averaging their predicted probabilities), we get a more accurate result than either model alone.
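For instance, a minimal sketch of that combination with made-up probabilities:

```python
import numpy as np

model_a = np.array([0.9, 0.1])   # P(cat), P(dog): confident it's a cat
model_b = np.array([0.4, 0.6])   # leans dog, but only weakly
avg = (model_a + model_b) / 2    # [0.65, 0.35]
label = ["cat", "dog"][avg.argmax()]  # the confident model wins: "cat"
```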

Ensembling isn’t always better

It’s important to note that simply adding models together won’t necessarily produce the best results every time. That would be like asking for a second opinion from someone who thinks exactly like you and assuming your answer is now solid!

In some cases, adding components can actually make things worse and lead to badly inaccurate predictions. This happens when the individual models make similar, correlated mistakes: instead of cancelling out, their shared errors reinforce each other and push the ensemble away from the correct choice.
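A tiny made-up example of how correlated mistakes defeat the vote:

```python
import numpy as np

true_label = 0
preds = np.array([1, 1, 0])            # two models share the same error
vote = np.bincount(preds).argmax()     # majority says 1: still wrong
# Adding more copies of the same biased model would not fix this;
# the ensemble only helps when the members' errors are independent.
```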

Combine models using a feedforward ensemble


An ensemble of different model types is another way to improve accuracy. This method uses multiple models that are not directly connected to each other but that work together in some way to produce the final result.

The most common type of ensemble is called bagging (short for “bootstrap aggregating”), where separate instances of the algorithm are trained on randomly resampled training sets. Each instance is referred to as a “base learner” or “weak classifier.” Averaging these weak predictions produces a better prediction than any individual model.
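Here is a minimal bagging sketch using scikit-learn (my choice of library; the dataset is synthetic and the settings are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier

X, y = make_classification(n_samples=500, random_state=0)

# Train 10 base learners (decision trees by default), each on its own
# bootstrap sample of the training set, then vote over their predictions.
bag = BaggingClassifier(n_estimators=10, random_state=0).fit(X, y)

print(bag.predict(X[:5]))
```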

A second technique is randomizing the input features each model sees. By doing this, the system cannot lean too heavily on any single part of the data’s structure, which may be misleading when trying to predict future examples.

By combining these two kinds of randomness, additional layers of protection against overfitting are added. And because every member of the ensemble is trained separately, members can also develop specializations that increase overall performance.
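Random forests are the classic combination of both ideas (bootstrap resampling of rows plus random feature subsets at each split); a quick scikit-learn sketch:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, random_state=0)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(forest.score(X, y))  # training accuracy, just as a quick sanity check
```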

Combine models using a backwards ensemble


One of the most effective ways to improve the accuracy of any model is to combine it with another that is not necessarily more accurate, but that approaches the problem with different concepts than the first.

A common example is a classification model that is trained on human-made judgments and uses algorithms to decide whether something contains positive or negative content (e.g., whether a picture shows a dog).

Then, you can take the algorithm’s results and use someone else’s judgment to confirm whether the image actually contains dog-related content or not!

This process, sometimes called validation via external sources, is popular in natural language processing, where there is often little context for what separates good-quality content from junk material.

By doing this several times, you get better estimates of how well the original classifier works: the algorithm was trained only on its internal view of the dataset, while the second set of labels comes from people.

It also gives us an idea of how much confidence each source has in its judgment. We can then combine the two sources using an average or a weighted mean, where less confident sources count for less, to produce our final result. This helps mitigate potential biases in the data.
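A minimal sketch of that weighted-mean fusion; the scores, weights, and the combine helper are all hypothetical:

```python
def combine(model_score, human_score, model_weight=0.4, human_weight=0.6):
    """Weighted mean of two confidence scores in [0, 1]."""
    total = model_weight + human_weight
    return (model_weight * model_score + human_weight * human_score) / total

# The model is fairly sure the image is dog-related; the human reviewer
# agrees but less strongly, so the fused score lands between the two.
print(combine(model_score=0.92, human_score=0.7))  # 0.788
```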

Combine models using a bagging ensemble


Another way to combine your classifiers is to take several different classification algorithms and blend them all together into one stronger model!

The most well-known example of this type of combination is probably the random forest algorithm. Rather than relying on a single decision tree to test against new data, it trains several and combines their predictions in some fashion to make a final decision.

More generally, there are two common ways to achieve this combining. The first has each model make its own prediction from the original input features, then averages or votes over those predictions. The second instead concatenates the trained models’ outputs and feeds them into a final meta-model that learns how to combine them (this is usually called stacking).

Which approach you use depends on how much computational power you have available and how large your dataset is. With very limited resources, the first route can save time, since there is no extra meta-model to train. As more compute becomes available, the second option starts to make more sense.
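Here is a minimal sketch of the second (stacked) route using scikit-learn’s StackingClassifier; the choice of base models is illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("forest", RandomForestClassifier(random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    final_estimator=LogisticRegression(),  # learns how to weight the bases
).fit(X, y)

print(stack.predict(X[:5]))
```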

Combine models using a randomization ensemble


An alternative approach to creating deep learning models is the ensemble method: combining multiple individual networks or layers as different components of one overall model.

The most common way to do this is via what’s called a bagging technique. Bagging, short for “bootstrap aggregating,” means training several similar models on resampled versions of the data and aggregating them into the whole, a bit like putting out several bags of chips at a party so everyone’s snack needs are covered!

A second option is known as stacking, where you take two or more pre-trained networks and combine their outputs in a custom fashion to create another network.

Both of these types of ensembling are interesting alternatives to training new networks from scratch. That said, there are some things that must be considered when choosing which type of ensemble to use.

In this article, we will discuss how to choose between bagged and stacked neural networks for object detection.
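As a concrete sketch of the bagged-style option, here is one way to average the outputs of two pre-trained networks in PyTorch (the torchvision models and the 0.13+ weights API are my assumptions; the input is random stand-in data):

```python
import torch
from torchvision import models

net_a = models.resnet18(weights="IMAGENET1K_V1").eval()
net_b = models.mobilenet_v2(weights="IMAGENET1K_V1").eval()

image = torch.randn(1, 3, 224, 224)        # stand-in for a real image
with torch.no_grad():
    probs_a = net_a(image).softmax(dim=1)
    probs_b = net_b(image).softmax(dim=1)
    ensemble = (probs_a + probs_b) / 2     # average the class probabilities

print(ensemble.argmax(dim=1))              # the ensemble's predicted class
```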

Combine models using an ensembling classifier


Another way to improve performance is to combine different model architectures, or variants of the same architecture, in what is referred to as an ensemble method. Using an ensemble method means taking many separate models and putting them together into one overall model.

The combined model then takes the predictions from each individual model and calculates their average or weighted mean. The weighting factors depend on how well each model performed on your dataset.
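A minimal sketch of that weighting scheme with made-up validation accuracies:

```python
import numpy as np

val_acc = np.array([0.91, 0.85, 0.78])   # per-model validation accuracy
weights = val_acc / val_acc.sum()        # normalize so weights sum to 1

# Each row: one model's predicted probabilities for [cat, dog].
preds = np.array([[0.8, 0.2],
                  [0.6, 0.4],
                  [0.3, 0.7]])
combined = weights @ preds               # weighted mean per class
print(combined, combined.argmax())       # better models get more say
```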

With many small, individually weaker models, each model’s weakness can be covered by the others’ strengths. For example, if one model systematically underestimates certain features, the other models are unlikely to make the exact same mistake!

There are several types of ensemble strategies, with some being better than others depending on what you want to achieve. It really comes down to testing and finding out which ones work for your problem domain.

Combine models using a feature combination


One of the most important concepts in deep learning is combining different networks or layers together into what’s called an ensemble model. In a feature-combination ensemble, the features learned by one network are reused to help make predictions even on data that network was never trained on, making the overall prediction more accurate!

Using an ensemble approach with neural networks is especially powerful because you can combine multiple small networks into one larger network, which helps the final classifier achieve better accuracy.

One reason this concept is so powerful is the overfitting risk that every individual part of a large network carries. If you have several smaller networks working as separate components, each will try hard to predict its own target accurately, but may end up ignoring parts of the input image because they do not look like anything in its training set.

By putting these separate sub-networks together into one super-network, we reduce this risk: each component’s features are carried forward, and the ensemble combines them only at the very last stage, where a final classifier learns how much to trust each one.
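Here is a minimal PyTorch sketch of that idea: two sub-networks produce features that are concatenated and combined only in the final classifier head (all sizes are illustrative):

```python
import torch
import torch.nn as nn

class FeatureEnsemble(nn.Module):
    def __init__(self, in_dim=64, num_classes=2):
        super().__init__()
        self.branch_a = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU())
        self.branch_b = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU())
        # The head is the only place the two branches are combined.
        self.head = nn.Linear(32 + 32, num_classes)

    def forward(self, x):
        feats = torch.cat([self.branch_a(x), self.branch_b(x)], dim=1)
        return self.head(feats)

logits = FeatureEnsemble()(torch.randn(4, 64))  # batch of 4 fake inputs
```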

There are many ways to create ensembles, such as having some number of identical base networks and choosing which ones to use at test time, or creating your own custom architectures by stacking together different types of layers and connections.

This article has walked you through several easy ways to start creating your own neural net ensemble systems.

Caroline Shaw is a blogger and social media manager. She enjoys blogging about current events, lifehacks, and her experiences as a millennial working in New York.