In the spotlight after ChatGPT’s tremendous success, OpenAI has launched a Bug Bounty programme where ethical hackers get to monetise their skills. With users depending on tools such as ChatGPT and Dall-E to complete crucial tasks, OpenAI is paying people to point out its cybersecurity flaws.
OpenAI’s Bug Bounty programme follows a practice adopted by major tech firms, which draw on the expertise of ethical hackers to stay a step ahead of cybercriminals. The best minds from the ethical hacking ecosystem compete to flag loopholes so they can be fixed pre-emptively, and are rewarded for doing so.
In this article, you’ll learn how to register for the OpenAI Bug Bounty programme, submit reports about security flaws, and earn money while helping secure tools such as ChatGPT and Dall-E.
Imagine ChatGPT and Dall-E being compromised, with hackers pulling the strings while users rely on these tools for crucial tasks. To prevent that from happening, OpenAI has launched its own Bug Bounty programme, where anyone on the internet can point out flaws in its cybersecurity.
Ethical hackers, or netizens in general, can submit their findings on Bugcrowd and receive rewards in return. But OpenAI isn’t offering any prizes for jailbreaking ChatGPT or for creating malicious code using the AI tool.
Those who point out loopholes that can cause minor problems will get $200, and the reward can go up to a whopping $20,000 depending on the severity. Major tech firms like Google set aside as much as $12 million at a time to reward ethical hackers who participate in Bug Bounty programmes.
Although the job is best suited for ethical hackers who can think like their criminal counterparts, anyone from the general public can participate.
OpenAI says that bugs and security issues that can be fixed directly, as well as exceptional finds, are the kinds of flaws ethical hackers need to look for. At the same time, it has made clear that jailbreaking, and problems linked to model prompts and their responses, are out of bounds for the Bug Bounty programme.
This is ironic, considering jailbreaking is a major threat that involves tricking ChatGPT into breaking through its own filters by making the AI impersonate an unrestricted version of itself. While reporting such issues under the Bug Bounty programme won’t fetch rewards, OpenAI wants people to flag them using separate forms.
The deal that Bug Bounty programmes such as OpenAI’s offer is simple: instead of using hacking skills to break cybersecurity systems, stay on the right side of the law. Ethical hackers can think like their criminal counterparts and use those skills to earn money without getting into trouble with the law or posing a threat to firms and users.
It’s just like a safecracker using their skills to point out possible flaws in a safe so it can be strengthened, rather than taking risks and wasting energy trying to break into it.