I started my career in experimental quantum computing, with a Master's degree and a few publications on noise-tolerant quantum control of trapped-ion qubits. I then spent five years trying my hand at the world of business and startups, part of which involved launching the UK expansion of a Spanish logistics startup. It was a blast, but not enough to keep me from being drawn back to my technical and analytical roots. In 2019 I watched the AlphaGo documentary, trained some reinforcement learning agents in OpenAI Gym, and built an MNIST classifier. The ML bug got hold of me, and I haven't looked back since.
I love to explore and contribute to the ML ecosystem whenever I can. Below are some highlights of my public work. For a summary of my professional experience, please see my LinkedIn.
I am currently a member of Hugging Face's robotics team and am a core contributor to their open source robotics library LeRobot. I mainly focus on modeling, training, and evaluation. To add my own personal touch to this work, I've started a repository of little experiments where I do deep dives into various aspects of robot learning models.
I distilled Diffusion Policies into consistency models as part of a push to understand diffusion models in depth. See the write-up and code here.
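To give a flavor of what consistency distillation involves, here is a heavily simplified sketch in PyTorch. The networks are toy `nn.Linear` stand-ins and all names and shapes are illustrative only; the real recipe (Song et al., 2023) uses a pretrained diffusion model as the teacher, an EMA copy of the student as the distillation target, and a proper ODE solver step between adjacent noise levels.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the real denoisers; inputs are [x, t] concatenated.
teacher = nn.Linear(3, 2)       # pretrained diffusion model, kept frozen
student = nn.Linear(3, 2)       # consistency model being distilled
ema_student = nn.Linear(3, 2)   # EMA copy of the student (the target network)
ema_student.load_state_dict(student.state_dict())

def one_distillation_step(x_next, t_next, t_prev):
    """One consistency-distillation step, heavily simplified."""
    # 1) Use the frozen teacher to take one Euler ODE-solver step
    #    from noise level t_next back down to t_prev.
    with torch.no_grad():
        x_prev = x_next + (t_prev - t_next) * teacher(torch.cat([x_next, t_next], dim=-1))
    # 2) Consistency objective: the student at t_next should agree with
    #    the EMA student at t_prev, since both points lie on the same ODE
    #    trajectory and should map to the same clean sample.
    pred = student(torch.cat([x_next, t_next], dim=-1))
    with torch.no_grad():
        target = ema_student(torch.cat([x_prev, t_prev], dim=-1))
    return nn.functional.mse_loss(pred, target)

loss = one_distillation_step(
    torch.randn(4, 2), torch.full((4, 1), 0.8), torch.full((4, 1), 0.7)
)
```

In the full algorithm this loss is minimized over random noise levels and data, with the EMA weights updated after every optimizer step.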
Contributed FX feature extraction to TorchVision
This contribution leverages PyTorch's symbolic tracing toolkit (torch.fx) to provide a compact, intuitive API for extracting intermediate features from TorchVision models.
I authored a related blog post in the official PyTorch blog.
I also made a YouTube tutorial.
Contributions to timm
timm is the go-to library for SOTA vision backbones in PyTorch. Some of my contributions include:
- Porting Nested Hierarchical Transformers from the official JAX implementation.
- Adapting XCiT.
- Adding support for FX feature extraction.
- Developing a handy little utility for freezing and unfreezing model weights while handling batch-norm parameters separately.
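To illustrate why batch norm needs special handling: setting `requires_grad = False` freezes the affine weights but does not stop the running statistics from updating during training. The sketch below shows the basic idea in plain PyTorch; it is a simplification, not timm's actual implementation, so see `timm.utils` for the real thing.

```python
import torch.nn as nn

def freeze(module: nn.Module) -> None:
    """Freeze all weights, and put BatchNorm layers in eval mode so their
    running mean/variance stop updating (freezing weights alone is not enough)."""
    for p in module.parameters():
        p.requires_grad = False
    for m in module.modules():
        if isinstance(m, nn.modules.batchnorm._BatchNorm):
            m.eval()

def unfreeze(module: nn.Module) -> None:
    for p in module.parameters():
        p.requires_grad = True
    module.train()

backbone = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU())
freeze(backbone)
```

One subtlety this sketch glosses over: calling `.train()` on a parent module flips frozen BatchNorm layers back into training mode, which is part of why a dedicated utility is handy.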
I believe in teaching to learn, so I occasionally record a screencast of myself explaining an ML concept. Check out my YouTube channel. This video on understanding attention in transformers has been particularly popular.
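The core computation that video builds up to fits in a few lines of NumPy: scaled dot-product attention, where each query produces a probability distribution over the keys and the output is the corresponding weighted average of the values.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d))  # (n_queries, n_keys), rows sum to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(5, 8))   # 5 queries of dimension 8
K = rng.normal(size=(7, 8))   # 7 keys of dimension 8
V = rng.normal(size=(7, 4))   # 7 values of dimension 4
out, w = attention(Q, K, V)   # out: (5, 4)
```

Those `w` matrices are exactly what gets visualized in attention-map plots.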
Kaggle was a great resource for spinning up my ML knowledge.
In the Bristol Myers Squibb - Molecular Translation competition I took 27th place (9th among solo competitors). In this GIF, I visualize one of the attention maps in my vision transformer + text decoder as it transcribes the molecule's International Chemical Identifier (InChI).
30th place in Kaggle's Global Wheat Detection competition.
Interactive web demo of GANSpace
After taking a short introductory course on Angular, I flexed my new skills by building a web-based front end that lets users flexibly tune attributes of a GAN's output. At the time, this was mind-blowing stuff for the general population and computer vision practitioners alike (can you believe that was just 2019!).
Just before jumping into ML, I took a quick detour back to quantum computing to see what I'd missed. I'm a strong believer in teaching to learn, so I made a tutorial on variational quantum eigensolvers (VQEs). Check it out here.
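The gist of a VQE fits in a toy example: a parameterized circuit prepares a trial state, and a classical optimizer tunes the parameters to minimize the energy ⟨ψ(θ)|H|ψ(θ)⟩. Here is a deliberately tiny single-qubit version with H = Z and an Ry(θ) ansatz, using a grid search where a real VQE would use a proper optimizer (and a quantum device to estimate the expectation value):

```python
import numpy as np

# Hamiltonian: the Pauli-Z operator, whose ground-state energy is -1.
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def ansatz(theta):
    """Trial state Ry(theta)|0> = [cos(theta/2), sin(theta/2)]."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    """Expectation value <psi(theta)| H |psi(theta)> (here equal to cos(theta))."""
    psi = ansatz(theta)
    return psi @ Z @ psi

# Classical outer loop: a simple grid search stands in for a real optimizer.
thetas = np.linspace(0, 2 * np.pi, 1001)
best = min(thetas, key=energy)
# energy(best) reaches the true ground-state energy, -1, near theta = pi
```

The tutorial covers the real thing: multi-qubit Hamiltonians, hardware-efficient ansätze, and measurement-based energy estimation.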