IBM researchers manage to “hypnotize” ChatGPT into misbehaving
IBM security researchers claim to have successfully “hypnotized” generative AI models such as ChatGPT and Bard into disclosing sensitive financial information, generating malicious code, encouraging users to pay a ransom, and even advising drivers to run red lights. The researchers tricked the models into producing incorrect responses by framing the interaction as a game.