Your "friendly" chat interface has become part of your attack surface. Prompt injection is an acute risk to your safety, individually and as a business.
Do any of these bots use their own previous outputs as further training data? That's one way these exploits could spread beyond "the same user who asks the bot to do ...
On Thursday, a few Twitter users discovered how to hijack an automated tweet bot, dedicated to remote jobs, running on the GPT-3 language model by OpenAI. Using a newly discovered technique called a ...
Some results have been hidden because they may be inaccessible to you
Show inaccessible results
Feedback