Extra softmax layer #6
Why is there an extra softmax layer at https://github.com/gram-ai/capsule-networks/blob/master/capsule_network.py#L106? Each capsule's norm is already modelling a probability.

Comments
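For context, here is a minimal sketch of the pattern being questioned (my paraphrase, not the repository's exact code; the tensor shapes are assumptions): the class scores are the digit-capsule norms, and a softmax is then applied on top of them.

```python
import torch
import torch.nn.functional as F

# x: digit-capsule outputs, assumed shape (batch, num_classes, capsule_dim)
x = torch.randn(32, 10, 16)

classes = (x ** 2).sum(dim=-1) ** 0.5  # capsule norms ||v_k||, one score per class
classes = F.softmax(classes, dim=-1)   # the extra softmax this issue asks about
```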
Though each capsule's norm is a probability in [0, 1], the capsules still compete among themselves to send their information to higher-level capsules (based on the correlation of their outputs with the outputs of the higher-level capsules). Hence the softmax layer.
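(The norm lands in [0, 1) because of the squashing non-linearity, Eq. 1 in the paper; a minimal sketch, with an `eps` term added here for numerical safety:)

```python
import torch

def squash(s, dim=-1, eps=1e-8):
    # v = (||s||^2 / (1 + ||s||^2)) * (s / ||s||), so ||v|| always lies in [0, 1)
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)
```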
That's not how Capsules work...
Maybe if you could write up your understanding of capsules, or point out the relevant lines in the paper, it would help us discuss and learn. Anyway, I will let the code owner clarify your doubts. In my understanding, the greater the correlation between a primary capsule's output and a digit capsule's output, the stronger the bond between them. Hence it's a kind of attention mechanism between primary capsules and digit capsules, which calls for a softmax (based on correlation); see the routing sketch below.
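For what it's worth, the softmax the paper does use sits inside dynamic routing, over the routing logits $b_{ij}$ across the output capsules. A rough sketch under assumed shapes (reusing `squash` from above; the names here are hypothetical, not the repository's):

```python
import torch
import torch.nn.functional as F

def route(u_hat, num_iterations=3):
    # u_hat: prediction vectors u_hat_{j|i}, shape (batch, in_caps, out_caps, dim)
    b = torch.zeros(u_hat.shape[:3], device=u_hat.device)  # routing logits b_ij
    for _ in range(num_iterations):
        c = F.softmax(b, dim=2)                        # coupling coefficients c_ij
        s = (c.unsqueeze(-1) * u_hat).sum(dim=1)       # weighted sum over input capsules
        v = squash(s)                                  # output vectors v_j, norms in [0, 1)
        b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)   # agreement update
    return v
```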
From the paper, section 3 ("Margin loss for digit existence"), you have that

$$L_k = T_k \, \max(0,\, m^+ - \lVert \mathbf{v}_k \rVert)^2 + \lambda \, (1 - T_k) \, \max(0,\, \lVert \mathbf{v}_k \rVert - m^-)^2$$

So, as you can see, you're supposed to use $\lVert \mathbf{v}_k \rVert$ directly in the loss; the paper never applies a softmax over the capsule norms.
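For reference, a minimal PyTorch sketch of that margin loss computed directly on the capsule norms, with $m^+ = 0.9$, $m^- = 0.1$, $\lambda = 0.5$ as in the paper (`norms` and `labels` are hypothetical names):

```python
import torch

def margin_loss(norms, labels, m_pos=0.9, m_neg=0.1, lam=0.5):
    # norms: (batch, num_classes) capsule lengths ||v_k||; labels: one-hot targets T_k
    present = labels * torch.clamp(m_pos - norms, min=0) ** 2
    absent = lam * (1 - labels) * torch.clamp(norms - m_neg, min=0) ** 2
    return (present + absent).sum(dim=-1).mean()
```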
Oh, I see. My bad. I didn't see which softmax you were referring to :) I think you are right: there is no need for the softmax, since the vector's magnitude already emulates a probability. Thanks for elaborating. By the way, I have noticed some more deviations of the implementation from the paper. Please check them if you find time; I'm not sure whether my interpretation is correct.