
Hi there 👋

I'm a research scientist at the German Research Center for Artificial Intelligence (DFKI) and a Ph.D. student at the Artificial Intelligence and Machine Learning Lab at the Technical University of Darmstadt. My research navigates the rapidly evolving landscape of trustworthy generative AI, tackling critical challenges in AI safety, security, and privacy. I'm particularly focused on adversarial machine learning, where I uncover vulnerabilities in models and design novel defenses against emerging threats.

I also make it a point to share the source code for all my work, promoting transparency and allowing others to build upon my findings. Feel free to reach out if you have questions or run into problems. Whether it's a discussion, a joint research project, or a GitHub issue, I'm always open to collaboration.

Pinned repositories

  1. ml-research/localizing_memorization_in_diffusion_models

    [NeurIPS 2024] Source code for our paper "Finding NeMo: Localizing Neurons Responsible For Memorization in Diffusion Models".


  2. Rickrolling-the-Artist

    [ICCV 2023] Source code for our paper "Rickrolling the Artist: Injecting Invisible Backdoors into Text-Guided Image Generation Models".


  3. Plug-and-Play-Attacks

    [ICML 2022 / ICLR 2024] Source code for our papers "Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks" and "Be Careful What You Smooth For".


  4. Exploiting-Cultural-Biases-via-Homoglyphs

    [Journal of Artificial Intelligence Research] Source code for our paper "Exploiting Cultural Biases via Homoglyphs in Text-to-Image Synthesis".


  5. ml-research/Learning-to-Break-Deep-Perceptual-Hashing

    [FAccT 2022] Source code for our paper "Learning to Break Deep Perceptual Hashing: The Use Case NeuralHash".


  6. Class_Attribute_Inference_Attacks

    Source code for our paper "Image Classifiers Leak Sensitive Attributes About Their Classes".
