OpenAI has started a bug bounty program for ChatGPT, but it won't accept jailbreaks.

On 11 April, the company launched a bug bounty program that pays cash to people who find security flaws in OpenAI's systems and report them.

You can now earn money by finding bugs in the world's most popular chatbot.

The program offers rewards ranging from $200 for low-severity findings up to $20,000 for exceptional discoveries, OpenAI says. It is run through Bugcrowd, a crowdsourced bug bounty platform.

You can participate in the bug bounty program via the link below:

Join the ChatGPT Bug Bounty Program

But OpenAI won't accept jailbreaks for ChatGPT, or prompts that try to trick the AI into breaking its own rules. Since ChatGPT first came out, people have found ways to jailbreak it to post swear words, write about prohibited political topics, or even produce malware.

The bug bounty program also won't accept reports of ChatGPT producing incorrect information. "Model safety issues do not fit well within a bug bounty program, as they are not individual, discrete bugs that can be directly fixed," OpenAI says. "Addressing these issues often involves substantial research and a broader approach." (Users can report problems with a model's safety through a separate form.)