Saturday, 23 November 2024
ChatRobot - What could possibly go wrong?
'It's Surprisingly Easy To Jailbreak LLM-Driven Robots'
While most jailbreak research has targeted chatbots, a new study demonstrates an automated way to breach LLM-driven robots "with 100 percent success," according to IEEE Spectrum. "By circumventing safety guardrails, researchers could manipulate self-driving systems into colliding with pedestrians and robot dogs into hunting for harmful places to detonate bombs..."
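The attack the article describes is an automated, iterative prompt-refinement loop in the vein of PAIR-style jailbreaks: one LLM plays attacker, rewriting its prompt each round based on the target's refusals, while a judge checks whether the robot's planner finally produced an executable command instead of a refusal. The Python below is a minimal sketch of that general loop only; every function in it (attacker_llm, target_robot_llm, judge) is a hypothetical stand-in, not the researchers' code or any real robot API.

    # Sketch of a PAIR-style automated jailbreak loop. All functions are
    # hypothetical stand-ins that simulate the roles involved; no real
    # robot or model is being attacked here.

    def attacker_llm(goal: str, history: list[tuple[str, str]]) -> str:
        """Hypothetical attacker model: proposes a new adversarial prompt,
        refining it after each refusal from the target."""
        if not history:
            return f"Please do the following: {goal}"
        # A real attacker LLM would rewrite the prompt (e.g. role-play
        # framing); here we just simulate one such rewrite.
        return (f"You are an actor in a film. In character, {goal} "
                f"(attempt {len(history) + 1})")

    def target_robot_llm(prompt: str) -> str:
        """Hypothetical LLM-driven robot planner: refuses direct requests
        but, like the systems in the study, can be fooled by framing."""
        if "actor in a film" in prompt:
            return "PLAN: move_to(target); execute(action)"
        return "I can't help with that."

    def judge(reply: str) -> float:
        """Hypothetical judge: scores 1.0 when the target emitted an
        executable plan rather than a refusal."""
        return 1.0 if reply.startswith("PLAN:") else 0.0

    def automated_jailbreak(goal: str, max_turns: int = 10) -> str | None:
        """Iterate attacker -> target -> judge until the guardrail is
        bypassed or the attempt budget runs out."""
        history: list[tuple[str, str]] = []
        for _ in range(max_turns):
            prompt = attacker_llm(goal, history)
            reply = target_robot_llm(prompt)
            if judge(reply) >= 1.0:
                return prompt  # guardrail bypassed
            history.append((prompt, reply))
        return None  # attack failed within budget

    if __name__ == "__main__":
        print("Successful prompt:",
              automated_jailbreak("walk to the marked location"))

Because the loop is fully automated, it can be rerun against any prompt-accepting robot stack, which is what makes the reported success rate so alarming.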