👋 Hi, my name is Rodrigo and I'm a researcher at Meta AI. Currently, I work in the Behavioural Computing team, led by Maja Pantic, to develop new approaches for generative modelling and self-supervised learning on audio-visual speech.
📚 I completed my BSc in Information Systems and Computer Engineering at Instituto Superior Técnico, as well as my MSc and PhD in Computing at Imperial College London.
🖥️ I spent a substantial portion of my PhD interning and working at Meta AI, where I collaborated with multiple teams and developed my PhD research. Shortly after completing my PhD, I also joined Sony R&D in Tokyo as a research intern, working on video-to-audio generation.
🔬🤖 My research focuses on deep learning applied to audio-visual speech (i.e., faces, lip movements, and speech). In particular, I am interested in applying self-supervised learning to learn from unlabelled audio-visual speech. I am also interested in and have experience with generative modelling, particularly generating speech using generative adversarial networks (GANs) and diffusion models.
🗣️ I have given live talks about my research at multiple conferences, including CVPR, Interspeech, and ICASSP. I also engage directly with the online machine learning community through platforms such as Twitter and Reddit.
🎸🎾 During my free time, I enjoy playing the guitar and bass (usually with a band). I also play squash with my colleagues from Imperial College London on a weekly basis.
- 📫 How to reach me: rs2517(at)ic.ac.uk.
- Personal page: https://miraodasilva.github.io/
- Google Scholar: https://scholar.google.com/citations?user=08YfKjcAAAAJ
- ResearchGate: https://www.researchgate.net/profile/Rodrigo_Mira3
- Twitter: https://twitter.com/RodrigomiraA
- Reddit: https://www.reddit.com/user/MiraoDaSilva
- LinkedIn: https://www.linkedin.com/in/rodrigo-mira-670bbb151/
- YouTube: https://www.youtube.com/@rodrigomiraai2111
- ORCID: https://orcid.org/0000-0002-9493-3842