# Deep Learning Scientist Task

## Introduction

Hi there! As part of the application process we would kindly ask you to complete this task. The first part is a coding task, in which you complete the script we have provided (see autoencoder.py). The second part is to show us your thinking by answering the questions below about how you could optimise the hyper-parameters of the autoencoder, as well as some questions on deep learning more generally. Please cite the relevant literature where appropriate.

Some of the questions are deliberately somewhat vague in their formulation; it is up to you to interpret them as you think best.

In total, this task should take you around 2-3 hours to complete.

You should hand in a markdown text file containing your written answers, together with a copy of the source code that can be run on another machine, so that your answers can be checked by running the test code within it.

## Autoencoder task

For the autoencoder task you will find most of the information in the docstrings of the file. Complete the code, and when you run it you should get a message saying that you succeeded! Do study the test function; it will help you complete the task.
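Since autoencoder.py is not reproduced in this README, the sketch below only illustrates the kind of skeleton the task involves: a small fully connected autoencoder in PyTorch with a self-checking test function. All class and function names here are assumptions for illustration, not the contents of the actual script.

```python
import torch
import torch.nn as nn


class AutoEncoder(nn.Module):
    """A small fully connected autoencoder for flattened images."""

    def __init__(self, input_dim: int = 784, latent_dim: int = 32):
        super().__init__()
        # Encoder compresses the input to a low-dimensional code.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128),
            nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Decoder reconstructs the input from the code.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128),
            nn.ReLU(),
            nn.Linear(128, input_dim),
            nn.Sigmoid(),  # assumes pixel values scaled to [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))


def test_autoencoder() -> None:
    # Illustrative stand-in for the provided test: train briefly on random
    # data and check that the reconstruction loss decreases.
    model = AutoEncoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    x = torch.rand(64, 784)
    first_loss = None
    for _ in range(200):
        opt.zero_grad()
        loss = loss_fn(model(x), x)
        loss.backward()
        opt.step()
        if first_loss is None:
            first_loss = loss.item()
    assert loss.item() < first_loss, "reconstruction loss did not decrease"
    print("Succeeded!")


if __name__ == "__main__":
    test_autoencoder()
```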

## Questions related to the autoencoder task

1. In the coding task you were asked to write some code for the autoencoder. Generally, we can try many combinations of reconstruction loss, regularisation strength, number of layers, etc., and figure out which set of parameters leads to the most natural images generated by the autoencoder. Tell us how you could automate this hyper-parameter selection process (a starting point is sketched after this list). What are the downsides of the method?
2. Describe how you could use a metric from the deep generative models literature for the above hyper-parameter optimisation experiment. What are the potential downsides of such a metric?
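As one concrete starting point for question 1, here is a minimal sketch of automating hyper-parameter selection via random search (Bergstra & Bengio, 2012). The search space, the `evaluate` placeholder, and the budget of 20 trials are illustrative assumptions; a real run would replace `evaluate` with training and validating the autoencoder, and libraries such as Optuna or Ray Tune implement more sophisticated strategies such as Bayesian optimisation.

```python
import math
import random

# Hypothetical search space mirroring the knobs named in the question.
SEARCH_SPACE = {
    "loss": ["mse", "bce"],        # reconstruction loss
    "weight_decay": (1e-6, 1e-2),  # regularisation strength (log-uniform)
    "n_layers": (1, 4),            # encoder/decoder depth
}


def sample_config() -> dict:
    lo, hi = SEARCH_SPACE["weight_decay"]
    return {
        "loss": random.choice(SEARCH_SPACE["loss"]),
        "weight_decay": 10 ** random.uniform(math.log10(lo), math.log10(hi)),
        "n_layers": random.randint(*SEARCH_SPACE["n_layers"]),
    }


def evaluate(config: dict) -> float:
    """Placeholder objective: train the autoencoder with `config` and
    return a validation score (lower is better)."""
    return random.random()  # stand-in for the real validation loss


best_config, best_score = None, float("inf")
for _ in range(20):  # fixed trial budget
    config = sample_config()
    score = evaluate(config)
    if score < best_score:
        best_config, best_score = config, score
print(best_config, best_score)
```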

## Other Questions

1. List several possible ways to estimate model uncertainty in deep learning. What are their strengths and weaknesses, theoretically and practically? Which method would you suggest, and why? How could one combine the strengths of the uncertainty estimation methods listed above into a computationally simple and robust solution using a well-known compression method for deep learning? (One such method is sketched after this list.)
2. Generative models are widely viewed as more robust than discriminative models. Why? Do deep generative models retain this robustness?
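As one concrete reference point for question 1, here is a minimal sketch of Monte Carlo dropout (Gal & Ghahramani, 2016), which reads uncertainty off the spread of repeated stochastic forward passes; deep ensembles (Lakshminarayanan et al., 2017) and their distillation into a single student network are natural comparisons. The toy model and input below are illustrative assumptions.

```python
import torch
import torch.nn as nn


def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 50):
    """Monte Carlo dropout: keep dropout active at test time and use the
    mean and spread of repeated forward passes as prediction and
    uncertainty estimate."""
    model.train()  # enables dropout; in practice, keep batch norm frozen
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)


# Toy regression model and input purely for demonstration.
model = nn.Sequential(
    nn.Linear(10, 64),
    nn.ReLU(),
    nn.Dropout(p=0.2),
    nn.Linear(64, 1),
)
mean, std = mc_dropout_predict(model, torch.randn(5, 10))
print(mean.shape, std.shape)  # per-example predictions and uncertainties
```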