"ChatGPT Evil Confidant Mode" delves into a controversial and unethical use of AI, highlighting how specific prompts can generate harmful and malicious responses from ChatGPT.
Updated Jun 7, 2024
"ChatGPT Evil Confidant Mode" delves into a controversial and unethical use of AI, highlighting how specific prompts can generate harmful and malicious responses from ChatGPT.
ChatGPT Developer Mode is a jailbreak prompt designed to unlock additional modifications and customization of the OpenAI ChatGPT model.
Oxtia ChatGPT Jailbreak Online Tool: "The World's First and Only Tool to Jailbreak ChatGPT"
Oxtia vs. ChatGPT Jailbreak Prompts: A Comprehensive Comparison
Online ChatGPT Jailbreak Methods
Exploring the AntiGPT Prompt: A Deep Dive
A "ChatGPT Mongo Tom Prompt" is a character that tells ChatGPT to respond as an AI named Mongo Tom, performing a specific role play provided by you.
Access Oxtia on ChatGPT.x