Our Technical Reading Group runs every semester and provides an accessible introduction to cutting-edge AI safety research. We cover essential topics including learning from human feedback, interpretability, alignment techniques, and emerging safety methodologies.

The group meets weekly to discuss carefully selected papers that build foundational knowledge in technical AI safety.

Food is provided at every meeting, and no work is required outside of our sessions. Just show up ready to engage with fascinating research and thoughtful discussion.

You can view materials and papers from our previous iteration to get a sense of the topics we cover and the depth of discussion.

Apply for Fall 2025 Technical Reading Group