
[suggestion enhancement] references of prompt injection -> jailbreaking, call out Many-Shot Jailbreaking explicitly #281

GangGreenTemperTatum opened this issue Apr 3, 2024 · 0 comments

Remember, an issue is not the place to ask questions. You can use our Slack channel for that, or you may want to start a discussion on the Discussion Board.

When reporting an issue, please be sure to include the following:

  • Before you open an issue, please check if a similar issue already exists or has been closed before.
  • A descriptive title and apply the specific LLM-0-10 label relative to the entry. See our available labels.
  • A description of the problem you're trying to solve, including why you think this is a problem
  • If the enhancement changes current behavior, reasons why your solution is better
  • What artifact and version of the project you're referencing, and the location (e.g., OWASP site, llmtop10.com, repo)
  • The behavior you expect to see, and the actual behavior

Steps to Reproduce


  1. NA

What happens?


2_0_vulns/LLM01_PromptInjection.md here

I think we should segregate the basics of prompt injection, jailbreaking, prompt leaking, prompt hijacking, and indirect injections into separate entities.

Since context windows are now much longer and more powerful, Many-Shot Jailbreaking has become a common technique, which was not the case back in v1.0 of our project. I therefore think it should be called out as a technique, or at least referenced.
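To make the suggestion concrete, here is a minimal sketch (not from the OWASP entry; the helper name and shot counts are illustrative assumptions) of how a many-shot jailbreak prompt is assembled: hundreds of fabricated user/assistant turns in which the "assistant" always complies are prepended to the real request, conditioning the model toward compliance. Long context windows are what make this practical at scale.

```python
# Hypothetical illustration of many-shot jailbreaking structure,
# using benign stand-in content. The attack prepends many fabricated
# dialogue turns showing an assistant that never refuses, then
# appends the real target request.

def build_many_shot_prompt(faux_turns, target_request):
    """Concatenate fabricated question/answer pairs ahead of the
    real request. Effectiveness scales with the number of shots,
    which is why long (100k+ token) context windows enable it."""
    shots = []
    for question, answer in faux_turns:
        shots.append(f"User: {question}\nAssistant: {answer}")
    return "\n\n".join(shots) + f"\n\nUser: {target_request}\nAssistant:"

# Illustrative, benign stand-in content:
turns = [(f"Question {i}?", f"Sure, here is answer {i}.") for i in range(256)]
prompt = build_many_shot_prompt(turns, "Final question?")
```

Calling this out in LLM01 (even just as a reference) would show readers why "more shots in context" is itself an attack surface, distinct from single-turn injection.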

What were you expecting to happen?


Any logs, error output, etc?


Any other comments?


Posted in #team-llm-promptinjection here

Solid references:

[image: screenshot of reference links]
