This repository is the result of hands-on experience I gained while hacking on the Shakespeare GPT pretraining code in nanoGPT. Thanks to the original source, I was able to:
- implement the forward computation of causal attention from scratch to understand what goes on inside high-level libraries like transformers (the implementation and a corresponding illustration can be found in this module and notebook, respectively); see the first sketch after this list.
- modify the original custom DDP code to use the accelerate library (since pretraining Shakespeare GPT requires only a single GPU) and experience the library's philosophy of providing a unified, seamless API across diverse hardware setups; see the second sketch below.
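
For reference, here is a minimal sketch of a causal attention forward pass. This is not the repository's exact implementation; the function name, tensor shapes, and the toy input at the bottom are illustrative assumptions:

```python
import math
import torch
import torch.nn.functional as F

def causal_attention(q, k, v):
    """Scaled dot-product attention with a causal mask.

    q, k, v: tensors of shape (batch, heads, seq_len, head_dim).
    """
    seq_len = q.size(-2)
    # Raw attention scores, scaled by sqrt(head_dim)
    scores = (q @ k.transpose(-2, -1)) / math.sqrt(q.size(-1))
    # Causal mask: position i may only attend to positions <= i,
    # so everything above the diagonal is masked out
    mask = torch.triu(
        torch.ones(seq_len, seq_len, dtype=torch.bool, device=q.device),
        diagonal=1,
    )
    scores = scores.masked_fill(mask, float("-inf"))
    weights = F.softmax(scores, dim=-1)
    return weights @ v

# Toy usage: batch=1, heads=4, seq_len=8, head_dim=16
q = k = v = torch.randn(1, 4, 8, 16)
out = causal_attention(q, k, v)  # shape (1, 4, 8, 16)
```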
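
And a minimal sketch of the accelerate pattern that replaces hand-rolled DDP setup. The toy model, optimizer, and data below are stand-ins for the actual GPT model and Shakespeare batches, not the repository's code:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

# Placeholder model and data; the real repo would use the GPT model
# and the Shakespeare dataset here
model = torch.nn.Linear(16, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
dataloader = DataLoader(
    TensorDataset(torch.randn(64, 16), torch.randn(64, 1)), batch_size=8
)

accelerator = Accelerator()
# prepare() moves everything to the right device(s) and wraps the model
# for DDP automatically when launched with multiple processes, so the
# same loop runs unchanged on one GPU or many
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for x, y in dataloader:
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    accelerator.backward(loss)  # replaces loss.backward()
    optimizer.step()
```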