Reinforcement Learning for Safer AI Agents: Meta Project

Nafisah Animashaun

Posted Nov 5, 2025

Worked with Meta on training AI agents using reinforcement learning from human feedback (RLHF), improving model reasoning, safety, and alignment through ethical fine-tuning and evaluation.
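
To illustrate the kind of objective RLHF fine-tuning optimizes, here is a minimal, self-contained Python sketch: maximize expected reward while penalizing divergence from a frozen reference policy. This is not the project's actual code; the toy policy, `toy_reward` stand-in for a learned reward/safety model, and the `BETA` penalty weight are all hypothetical choices made for illustration.

```python
# Sketch of the RLHF objective: maximize E[reward] - beta * KL(policy || reference).
# The "policy" is a categorical distribution over a tiny action set, and
# toy_reward is a hypothetical stand-in for a learned reward model.

import torch
import torch.nn.functional as F

torch.manual_seed(0)

NUM_ACTIONS = 8   # stand-in for a token vocabulary
BETA = 0.1        # strength of the KL penalty toward the reference policy

# Trainable policy logits and a frozen "reference" policy (e.g. the SFT model).
policy_logits = torch.zeros(NUM_ACTIONS, requires_grad=True)
ref_logits = torch.randn(NUM_ACTIONS)

optimizer = torch.optim.Adam([policy_logits], lr=0.05)

def toy_reward(actions: torch.Tensor) -> torch.Tensor:
    """Hypothetical reward model: prefers higher-indexed actions."""
    return actions.float() / (NUM_ACTIONS - 1)

for step in range(200):
    probs = F.softmax(policy_logits, dim=-1)
    dist = torch.distributions.Categorical(probs=probs)

    actions = dist.sample((64,))          # sample a batch of "responses"
    rewards = toy_reward(actions)

    # Per-sample KL-style penalty: log pi(a) - log pi_ref(a).
    log_probs = dist.log_prob(actions)
    ref_log_probs = F.log_softmax(ref_logits, dim=-1)[actions]
    shaped_rewards = rewards - BETA * (log_probs.detach() - ref_log_probs)

    # REINFORCE with a mean baseline: raise log-probability of above-average samples.
    advantages = shaped_rewards - shaped_rewards.mean()
    loss = -(advantages * log_probs).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("final policy:", F.softmax(policy_logits, dim=-1))
```

The KL term keeps the fine-tuned policy close to its reference model, which is one of the standard levers for trading off reward maximization against preserving the base model's behavior; production RLHF pipelines typically replace the REINFORCE update here with PPO and the toy reward with a preference-trained reward model.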