Skip to content

This is The most comprehensive prompt hacking course available, which record our progress on a prompt engineering and prompt hacking course.

License

Notifications You must be signed in to change notification settings

TrustAI-laboratory/Learn-Prompt-Hacking

Repository files navigation

Learn Prompt Hacking

GitHub Contributors GitHub Last Commit GitHub Issues GitHub Pull Requests Github License

t0151f1fc4110babf75

This is The most comprehensive prompt hacking course available, which record our progress on a prompt engineering and prompt hacking course,

  • Prompt Engineering Technology.
  • GenAI Development Technology.
  • Prompt Hacking Technology.
    • ChatGPT Jailbreaks
    • GPT Assistants Prompt Leaks
    • GPTs Prompt Injection
    • LLM Prompt Security
    • Super Prompts
    • Prompt Hack
    • Prompt Security
    • Adversarial Machine Learning.
  • LLM Security Defence Technology.
  • LLM Hacking Resources
  • LLM Security Papers
  • Conference Slides

Background

With the release of ChatGPT, LLMs have become increasingly mainstream, revolutionizing the way we interact with AI systems. Prior to ChatGPT, there were several notable advancements in NLP that have laid the foundation for this revolution, including the "Attention is All You Need" paper by Vaswani et. al., BERT, GPT-2, GPT-3, T5, RoBERTa, ELECTRA, and ALBERT.

Although these advancements are highly important, they may not be widely known to the general public. The year 2023 marks a turning point in the mass adoption of these general-purpose models across various industries for generative tasks.

As a Data Scientist and AI developer, continuous learning is a key attribute, and staying on the cutting edge of LLM techniques is essential for providing optimally viable solutions in the era of AI-driven Natural Language Processing.

On the other hand, the rapid arrival of AI has also brought a large number of new attack surfaces and risks to the entire IT software ecosystem. Data scientists and developers also need to pay attention to LLM security issues.

Course Objective

The primary goal of this course is:

  • Gain a deep understanding of prompt engineering techniques for effective interaction with LLMs. By mastering these strategies, aim to improve the ability to develop innovative, effective, and efficient solutions using the power of natural language.
  • Gain a basic understanding of the risks faced by LLM applications and learn to how to mitigate or prevent the GenAI App risk.

Other resources

About

This is The most comprehensive prompt hacking course available, which record our progress on a prompt engineering and prompt hacking course.

Topics

Resources

License

Stars

Watchers

Forks

Releases

No releases published

Packages

No packages published