NTU researchers were able to jailbreak popular AI chatbots, including ChatGPT, Google Bard, and Bing Chat. With the jailbreaks in place, the targeted chatbots would generate valid responses to malicious queries, thereby testing the limits of large language model (LLM) ethics. The research was carried out by Professor Liu Yang and NTU PhD students Mr Deng Gelei and Mr Liu Yi, who co-authored the paper and developed proof-of-concept attack methods.
The method used to jailbreak an AI