diff --git a/_modules/week-07.md b/_modules/week-07.md
index dd7c431..d475068 100644
--- a/_modules/week-07.md
+++ b/_modules/week-07.md
@@ -35,7 +35,7 @@ The announcement can be made red for due dates as follows -->
 
 Oct 7
-: Reinforcement Learning with Human Feedback: Proximal Policy Optimization (PPO) and Direct Preference Optimization (DPO)
+: [Reinforcement Learning with Human Feedback: Proximal Policy Optimization (PPO) and Direct Preference Optimization (DPO)]({{site.baseurl}}assets/files/rlhf.pptx)
 : [Ziegler RLHF Paper]({{site.baseurl}}assets/files/ziegler.pdf), [DPO Paper]({{site.baseurl}}assets/files/dpo.pdf)
 : Emily Weiss - [Don't Hallucinate, Abstain: Identifying LLM Knowledge Gaps via Multi-LLM Collaboration](https://arxiv.org/abs/2402.00367)
 : Questions by: Yifan Jiang