From abaf1961c1c6a72675af2d3931c9e1476d74c3be Mon Sep 17 00:00:00 2001
From: tnoel20
Date: Thu, 7 Jul 2022 15:07:09 -0700
Subject: [PATCH] Updated readme

---
 readme.md | 22 ++++++++++++++--------
 1 file changed, 14 insertions(+), 8 deletions(-)

diff --git a/readme.md b/readme.md
index dc15e93..756f724 100644
--- a/readme.md
+++ b/readme.md
@@ -1,16 +1,22 @@
-# Large Margin Loss
+# Thomas Noel's Master's Degree Project
 
-A Pytorch implementation of `Large Margin Deep Networks for Classification`
+Outlier exposure (OE) has been shown to be an effective method for improving anomaly detection performance at test time [1]. The method presented in [1] uses logit suppression: it minimizes the KL-divergence between the model’s softmax distribution on outlier training examples and the uniform distribution. A potential alternative is to aggregate all out-of-distribution instances into a single “outlier class” at training time.
 
-## [[arxiv]](https://arxiv.org/abs/1803.05598) [[Official TF Repo]](https://github.com/google-research/google-research/tree/master/large_margin)
+Both of these methods are compatible with a variety of loss functions. Among these, the margin loss [2] is of interest. We propose a set of experiments crossing these outlier exposure methods with cross-entropy and margin losses (a rough sketch of the two outlier exposure objectives is given below).
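+
+The sketch below assumes a standard PyTorch setup; the function and variable names are placeholders rather than code from this repository, and the in-distribution cross-entropy term could equally be swapped for the margin loss:
+
+```python
+import torch
+import torch.nn.functional as F
+
+def logit_suppression_loss(model, x_in, y_in, x_out, lam=0.5):
+    """Cross-entropy on in-distribution data plus a term pushing the
+    softmax on outlier data toward the uniform distribution (KL-based OE)."""
+    ce = F.cross_entropy(model(x_in), y_in)
+    # KL(uniform || softmax) reduces, up to an additive constant, to the
+    # negative mean log-probability over classes on the outlier batch.
+    oe = -F.log_softmax(model(x_out), dim=1).mean(dim=1).mean()
+    return ce + lam * oe
+
+def kitchen_sink_loss(model, x_in, y_in, x_out, num_classes):
+    """Label every outlier with a single extra 'outlier class'; assumes
+    the model has num_classes + 1 outputs."""
+    x = torch.cat([x_in, x_out], dim=0)
+    y_out = torch.full((x_out.size(0),), num_classes,
+                       dtype=torch.long, device=x_out.device)
+    y = torch.cat([y_in, y_out], dim=0)
+    return F.cross_entropy(model(x), y)
+```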
-## Results
+The following is the experiment matrix that we're interested in (the four resulting configurations are enumerated after the links below):
 
+| | **Cross-Entropy** | **Margin Loss** |
+|:------------|:------------------|:----------------|
+|**Logit Suppression**| | |
+|**Kitchen Sink**| | |
 
-Testing only `MNIST` dataset in [jupyter](mnist.ipynb)
+This draws heavily from the paper linked below.
+## [[arxiv]](https://arxiv.org/abs/1803.05598) [[Official TF Repo]](https://github.com/google-research/google-research/tree/master/large_margin)
 
-## TOOD
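+
+Concretely, each cell of the matrix is one training configuration. A minimal sketch of enumerating the four runs (the configuration keys are illustrative, and the print is a stand-in for launching a training run):
+
+```python
+from itertools import product
+
+# The two outlier-exposure methods and two losses from the experiment matrix.
+OE_METHODS = ("logit_suppression", "kitchen_sink")
+LOSSES = ("cross_entropy", "margin")
+
+# One configuration per cell of the 2x2 matrix.
+EXPERIMENTS = [{"oe_method": m, "loss": l} for m, l in product(OE_METHODS, LOSSES)]
+
+for cfg in EXPERIMENTS:
+    print(cfg)  # stand-in for launching a training run with this configuration
+```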
+
+## Results
 
-Make paper figure
\ No newline at end of file
+### Coming Soon!
\ No newline at end of file