Updated readme
tnoe1 committed Jul 7, 2022
1 parent f149e84 commit abaf196
Showing 1 changed file (readme.md) with 14 additions and 8 deletions.

# Thomas Noel's Master's Degree Project

Outlier exposure (OE) has been shown to be an effective way to improve anomaly detection performance at test time [1]. The method presented in [1] suppresses logits on outlier inputs by minimizing the KL-divergence between the model's softmax distribution and the uniform distribution. A potential alternative is to aggregate all out-of-distribution instances into a single "outlier class" at training time.
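
For concreteness, here is a minimal PyTorch sketch of the two objectives. The function names, and the convention of reserving one extra class index for outliers, are illustrative assumptions, not code from this repository.

```python
import torch
import torch.nn.functional as F

def logit_suppression_loss(outlier_logits: torch.Tensor) -> torch.Tensor:
    """Push the softmax over outlier inputs toward the uniform
    distribution; up to an additive constant this matches the
    KL-divergence objective described in [1]."""
    return -F.log_softmax(outlier_logits, dim=1).mean()

def outlier_class_loss(outlier_logits: torch.Tensor,
                       outlier_class: int) -> torch.Tensor:
    """Alternative: label every out-of-distribution instance with a
    single extra "outlier class" and use ordinary cross-entropy."""
    targets = torch.full((outlier_logits.size(0),), outlier_class,
                         dtype=torch.long, device=outlier_logits.device)
    return F.cross_entropy(outlier_logits, targets)
```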

Both of these methods are compatible with a variety of loss functions. Among these, the margin loss [2] is of particular interest. We propose a set of experiments that crosses each outlier exposure method with cross-entropy and margin losses.
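
For the margin column, a simplified sketch is shown below. The full loss of [2] also normalizes each logit gap by the norm of its gradient with respect to the input; that term is omitted here, so this is a logit-space stand-in rather than the complete large-margin loss.

```python
import torch

def multiclass_margin_loss(logits: torch.Tensor, targets: torch.Tensor,
                           gamma: float = 1.0) -> torch.Tensor:
    """Hinge on the gap between the true-class logit and every other
    logit. Unlike the full method of [2], the gaps are not divided by
    their input-gradient norms."""
    correct = logits.gather(1, targets.unsqueeze(1))    # (B, 1)
    gaps = gamma + logits - correct                     # (B, K)
    gaps = gaps.scatter(1, targets.unsqueeze(1), 0.0)   # zero out the true class
    return gaps.clamp(min=0).sum(dim=1).mean()
```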

The following is the experiment matrix that we're interested in:

| **OE Method** | **Cross-Entropy** | **Margin Loss** |
|:------------|:------------------|:----------------|
|**Logit Suppression**| | |
|**Kitchen Sink**| | |
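
Since the matrix is a plain 2x2 cross, the corresponding runs can be enumerated directly; the configuration keys below are hypothetical placeholders, not names used in this repository.

```python
from itertools import product

OE_METHODS = ["logit_suppression", "kitchen_sink"]   # rows of the matrix
LOSSES = ["cross_entropy", "margin"]                 # columns of the matrix

# One training configuration per cell of the experiment matrix.
CONFIGS = [{"oe_method": m, "loss": l} for m, l in product(OE_METHODS, LOSSES)]

for cfg in CONFIGS:
    print(cfg)  # e.g. {'oe_method': 'logit_suppression', 'loss': 'cross_entropy'}
```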

This repository draws heavily from the paper linked below.

## [[arxiv]](https://arxiv.org/abs/1803.05598) [[Official TF Repo]](https://github.com/google-research/google-research/tree/master/large_margin)

<hr>

## Results

### Coming Soon!
