The Softmax Activation Function Explained
In this short tutorial, we'll explore the Softmax activation function, including its use in classification tasks, and how it relates to cross entropy loss.
Here's what we'll be covering:
Table Of Contents
What Is The Softmax Function Used For?
Why Use Softmax Instead Of The Max Or Argmax Activation Functions?
The Softmax Activation Function Expressed
Softmax + Cross-Entropy Loss (Caution: Math Alert)
Conclusion
What Is The Softmax Function Used For?
One of the most common tasks in ML is Classification: given an input (image, video, text, or audio), can a model return the class it belongs to? If we use the simplest form of neural network out there, say, a multilayer perceptron, how do we convert its output into a class?
NOTE: Remember that the output of an MLP is nothing but a weighted sum of its inputs, i.e. $z = \sum_i w_i x_i + b$, a scalar value.
Essentially, we need some way to transform this number into something that can give us a notion of which class the input belongs to. This is where Activation functions come in (Softmax being one of the most commonly used activation functions).
For example, take the most common (and inarguably the most important) Image Classification problem: Hot Dog or Not Hot Dog 🌭. Given an image of food, our task is to classify the image as "Hot Dog" or "Not Hot Dog". In essence, our task is Binary Classification. If we assign, say, 1 to "Hot Dog" and 0 to "Not Hot Dog", then our model should output something between 0 and 1, and based on some threshold, we can assign the class appropriately.
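As a minimal sketch of that idea (the logit value and the 0.5 threshold below are purely illustrative, not from a trained model), thresholding a single output might look like this:

```python
import numpy as np

# Illustrative raw score (logit) for "Hot Dog" from a hypothetical binary classifier.
logit = 1.3

# Squash the score into (0, 1) with a sigmoid, then threshold it to pick a class.
probability = 1 / (1 + np.exp(-logit))
label = "Hot Dog" if probability >= 0.5 else "Not Hot Dog"

print(f"P(Hot Dog) = {probability:.3f} -> {label}")
```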
But what if we have a Multi-Class Classification problem? A single 0 or 1 won't cut it.
Enter Softmax.
Why Use Softmax Instead Of The Max Or Argmax Activation Functions?
You might be asking yourself why we should use Softmax instead of just using the maximum or argmax functions. Let's dig in.
First, consider using the max function as an activation, i.e. a function that picks out the largest value from a given sequence of inputs. So, if we have an input like, say, $[1.2, 3.4, 0.8]$, the output would look like $[0, 1, 0]$: a 1 at the position of the largest value, while all the other values are just returned as zeros. The argmax is a slightly different variant of this, where the function returns the index of the largest value (here, 1) rather than the entire list.
Softmax is a softer version of the max function (who would've guessed!). Instead of returning a binary sequence with a 1 for the max and 0s otherwise, what if we want probability values for the non-max inputs rather than just zeros? As you can imagine, for multi-class classification, 0s and 1s don't really help; what we want instead is a distribution of values. This is where Softmax comes in.
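To make the contrast concrete, here's a small NumPy sketch (the scores are made-up values for three classes):

```python
import numpy as np

scores = np.array([1.2, 3.4, 0.8])  # illustrative raw scores for three classes

# "Hard" max as an activation: 1 at the largest score, 0 everywhere else.
hard_max = (scores == scores.max()).astype(float)  # [0., 1., 0.]

# argmax: just the index of the largest score.
index = np.argmax(scores)                          # 1

# Softmax: a full probability distribution over all three classes.
softmax = np.exp(scores) / np.exp(scores).sum()    # ~[0.094, 0.844, 0.063]

print(hard_max, index, softmax, softmax.sum())
```

Notice that the softmax output still ranks the second class highest, but it also tells us how confident the model is relative to the other classes.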
The Softmax Activation Function Expressed
The Softmax Activation Function can be mathematically expressed as:

$$\sigma(z)_i = \frac{e^{z_i}}{\sum_{j=1}^{K} e^{z_j}} \qquad \text{for } i = 1, \dots, K$$

where $z$ is the vector of raw outputs (logits) and $K$ is the number of classes.
This function outputs a sequence of probability values, thus making it useful for multi-class classification problems. For example, for a 5-class classification problem, the output from the Softmax function might look something like this:

$$[0.05, \; 0.10, \; 0.20, \; 0.40, \; 0.25]$$

As you can see, the values sum to 1, and the interpretation would be that, assuming the classes have been one-hot encoded, the 4th class (or 3rd index) is the most probable, with the 5th and 3rd closely after.
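A direct translation of the formula into NumPy might look like the sketch below (the logits are illustrative; subtracting the maximum before exponentiating is a standard trick to avoid numerical overflow and doesn't change the result):

```python
import numpy as np

def softmax(z):
    """Softmax over a 1-D array of logits."""
    shifted = z - np.max(z)      # shift by the max for numerical stability
    exps = np.exp(shifted)
    return exps / exps.sum()

logits = np.array([0.5, 1.1, 1.8, 2.5, 2.0])  # illustrative 5-class logits
probs = softmax(logits)
print(probs)        # ~[0.054, 0.099, 0.200, 0.402, 0.244]
print(probs.sum())  # 1.0
```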

Illustration of how One-Hot Encoding would work for a sentence. Source: SauravMaheshkar/infographics
Softmax + Cross-Entropy Loss (Caution: Math Alert)
Using our definitions from the section above, say $y$ represents the target probability distribution (the one-hot encoded labels), $z$ represents the unnormalized log probabilities (the raw logits output from the network), and $\hat{y}$ represents the softmax outputs, i.e.

$$\hat{y}_i = \sigma(z)_i = \frac{e^{z_i}}{\sum_{j} e^{z_j}}$$

Then the cross-entropy loss is:

$$L = -\sum_{i} y_i \log \hat{y}_i$$

Now let's throw everything together:

$$L = -\sum_{i} y_i \log \frac{e^{z_i}}{\sum_{j} e^{z_j}} = -\sum_{i} y_i \left( z_i - \log \sum_{j} e^{z_j} \right)$$

Because $y$ is one-hot, only the term for the correct class $c$ survives, so the loss reduces to $L = \log \sum_{j} e^{z_j} - z_c$.
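Putting the two pieces together in code, a minimal NumPy sketch (the logits and the one-hot target below are made up) could look like this:

```python
import numpy as np

def cross_entropy_from_logits(z, y):
    """Cross-entropy between a one-hot target y and softmax(z), via log-softmax."""
    shifted = z - np.max(z)                                 # numerical stability
    log_probs = shifted - np.log(np.sum(np.exp(shifted)))   # log softmax(z)
    return -np.sum(y * log_probs)

z = np.array([0.5, 1.1, 1.8, 2.5, 2.0])   # illustrative logits
y = np.array([0.0, 0.0, 0.0, 1.0, 0.0])   # one-hot target: the 4th class
print(cross_entropy_from_logits(z, y))     # ~0.91
```

In practice, frameworks handle this fusion for you: PyTorch's nn.CrossEntropyLoss and TensorFlow's softmax cross-entropy ops take raw logits directly, combining the softmax and the log in one numerically stable step.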
Conclusion
And that wraps up our short tutorial on the Softmax Activation Function. If you have any questions or comments, please feel free to add them below.
To see the full suite of Weights & Biases features, please check out this short 5-minute guide. If you want more reports covering the math and "from-scratch" code implementations, let us know in the comments down below or on our forum ✨!
Check out these other reports on Fully Connected covering other fundamental concepts like Linear Regression, Cross Entropy Loss, and Decision Trees.
An Introduction to Linear Regression For Machine Learning (With Examples)
In this article, we provide an overview of, and a tutorial on, linear regression using scikit-learn, with code and interactive visualizations so you can follow.
Decision Trees: A Guide with Examples
A tutorial covering Decision Trees, complete with code and interactive visualizations.
What Is Cross Entropy Loss? A Tutorial With Code
A tutorial covering Cross Entropy Loss, with code samples to implement the cross entropy loss function in PyTorch and Tensorflow with interactive visualizations.
Introduction to Cross Validation Techniques
A tutorial covering Cross Validation techniques, complete with code and interactive visualizations.
Introduction to K-Means Clustering (With Examples)
A tutorial covering K-Means Clustering, complete with code and interactive visualizations.
A Gentle Introduction To Weight Initialization for Neural Networks
An explainer and comprehensive overview of various strategies for neural network weight initialization.