OpenAI, the company behind ChatGPT, is launching a program that could let internet users earn up to $20,000 for finding security flaws. The company is offering financial incentives for uncovering major vulnerabilities in the operation of its AI chatbot.

This initiative, dubbed OpenAI's Bug Bounty Program, is designed to pay anyone who reports security flaws and other vulnerabilities. Rewards range from $200 to $20,000, depending on the severity of the flaw discovered.

The hunt for bugs is on. 

OpenAI says it "appreciate[s] ethical hackers who help [it] uphold high privacy and security standards for [its] users and technology."

The company hopes this appeal will help it secure its tools and keep its privacy and security standards high for the millions of ChatGPT users around the world.

A form is already in place for these ethical hackers to report possible flaws. The reward varies not only with the type of vulnerability detected but also with its "likelihood or impact," at OpenAI's sole discretion.

The program is not about evaluating or judging the quality or relevance of ChatGPT's answers to users.

What interests the technical teams instead are issues related to authentication, authorization, payments, and data exposure.

The aim, of course, is to improve the overall security of ChatGPT. The conversational AI is currently in the eye of the storm, caught between fears that it can be manipulated and concerns about its potential to disclose sensitive personal data.

Even if you don't become a "prompt engineer," you can still hope to earn some money with ChatGPT.