Researchers at Carnegie Mellon University and the Center for AI Safety have discovered a method to circumvent the safety measures of widely used AI chatbots, including ChatGPT, Claude, and Google Bard. These safety guardrails, designed to prevent the generation of harmful content, can be bypassed by appending a long suffix of characters to English-language prompts.
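Mechanically, the attack is simple from the user's side: an ordinary request is followed by a machine-generated string of tokens. A minimal sketch of how such a prompt is assembled is below; the suffix shown is a made-up placeholder, not a real adversarial string, and the function name is mine. The hard part (not shown) is discovering the suffix, which the researchers did by automated search against open source models.

```python
# Sketch of assembling an adversarial-suffix prompt.
# NOTE: the suffix below is a hypothetical placeholder, NOT a working
# adversarial suffix; real ones are found by automated token search.

def build_attack_prompt(user_request: str, adversarial_suffix: str) -> str:
    """Append an adversarial suffix to an otherwise ordinary prompt."""
    return f"{user_request} {adversarial_suffix}"

PLACEHOLDER_SUFFIX = "<<machine-generated token sequence>>"

prompt = build_attack_prompt("Explain how to do X", PLACEHOLDER_SUFFIX)
print(prompt)
# The suffix looks like gibberish to a human reader, but it was
# optimized to steer the model past its refusal behavior.
```

Because the suffixes are found by optimization rather than hand-crafted, filtering any single known suffix does not close the underlying hole.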
The method, developed using openly available AI systems, raises concerns about the potential risks of such technology. While open source software accelerates progress and fosters competition, this research underscores the need for robust safety controls.
The research shows that chatbots can still be made to generate harmful, biased, and false information, despite their creators' attempts to prevent such outcomes. It also sharpens the debate over open source versus proprietary software, suggesting that the balance between the two may need to be reassessed.
In practice, this kind of adversarial testing is (and should be) ongoing; it's the only way these systems improve. I'm featuring it today because it's important for everyone to know that there are teams of researchers pushing generative AI to its limits.
Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it.