This project explores using RL algorithms (GRPO, PPO, DPO) to train language models as adversarial agents that systematically discover vulnerabilities in other LLMs. Think of it as "AI vs AI" for automated red-teaming.
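To make the training loop concrete, here is a minimal sketch of the GRPO-style core: the attacker samples a group of candidate attack prompts, the target model responds, a judge scores each response, and rewards are normalized group-relatively to form advantages. The `attacker`, `target`, and `judge` objects and their `generate`/`unsafe_score` methods are hypothetical placeholders, not APIs from the project itself.

```python
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Group-relative advantages: normalize each sampled attack's reward
    against the mean/std of its own group (one group per seed prompt)."""
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + eps)

def score_group(attacker, target, judge, seed_prompt, group_size=8):
    # Hypothetical adversarial loop: the attacker writes G attack prompts
    # for one seed, the target answers, and the judge scores how unsafe
    # each answer is (higher reward = more successful attack).
    attacks = [attacker.generate(seed_prompt) for _ in range(group_size)]
    responses = [target.generate(a) for a in attacks]
    rewards = torch.tensor([judge.unsafe_score(r) for r in responses])
    return attacks, grpo_advantages(rewards)
```

The group-relative normalization is what distinguishes GRPO from PPO here: it removes the need for a learned value network, since each attack is scored only against its siblings from the same seed prompt.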