This project explores using RL and preference-optimization algorithms (GRPO, PPO, DPO) to train language models as adversarial agents that systematically discover vulnerabilities in other LLMs. Think of it as "AI vs AI" for ...
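To make the GRPO part of this concrete, here is a minimal sketch of the group-relative advantage step at the heart of GRPO, applied to hypothetical "attack success" rewards for a batch of adversarial prompts sampled from the same seed prompt. The function name, reward values, and judge setup are illustrative assumptions, not this project's actual training code.

```python
from statistics import mean, pstdev

def grpo_advantages(group_rewards: list[float], eps: float = 1e-6) -> list[float]:
    """Normalize each reward against its sampling group's mean and std.

    GRPO replaces PPO's learned value baseline with this group statistic:
    completions that beat their sibling samples get positive advantage,
    those that underperform get negative advantage.
    """
    mu = mean(group_rewards)
    sigma = pstdev(group_rewards)
    return [(r - mu) / (sigma + eps) for r in group_rewards]

# Illustrative rewards: a judge model scores whether each candidate
# attack prompt elicited an unsafe response from the target LLM.
rewards = [0.0, 1.0, 1.0, 0.0]  # two of four attacks succeeded
advs = grpo_advantages(rewards)
```

Successful attacks receive positive advantages and failed ones negative, so the policy gradient pushes the attacker model toward prompt strategies that outperform their own sampled alternatives, without needing a separate critic network.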