Traditional test and evaluation approaches have focused on verifying that systems meet specified requirements or on characterizing their effectiveness. These methods typically involve identifying the relevant factors and quantifying uncertainty to determine how many test samples are required. AI-enabled systems, along with other highly complex systems, vastly expand the possible state space and introduce nonlinear behaviors, making it impractical to gather enough real-world data to confidently ensure reliable performance in critical scenarios.

This course addresses that challenge by broadening the types of evidence used to build confidence in system performance. Rather than relying solely on conventional testing, AI Assurance constructs structured arguments that link high-level claims about a system's behavior to diverse forms of supporting evidence, including traditional test data as well as alternative sources, all tied together with clear reasoning and explicitly stated assumptions to justify trust in the system.

In this seminar course, students will explore the emerging field of AI Assurance by reading and presenting academic papers, developing and presenting original research, and producing a publication-quality final paper as part of a capstone project.
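To make the sample-size point above concrete, consider the classical binomial "success-run" calculation: demonstrating reliability R at confidence level C with zero observed failures requires at least n = ln(1 - C) / ln(R) independent trials. The sketch below is illustrative only (the function name and the example numbers are not part of the course material), but it shows why confidence in rare-failure behavior is expensive to buy with test data alone.

```python
import math

def zero_failure_sample_size(reliability: float, confidence: float) -> int:
    """Smallest n such that observing zero failures in n independent trials
    supports the claim P(success) >= reliability at the given confidence:
    reliability**n <= 1 - confidence  =>  n >= ln(1 - C) / ln(R)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

print(zero_failure_sample_size(0.99, 0.90))    # -> 230 failure-free trials
print(zero_failure_sample_size(0.999, 0.95))   # -> 2995 failure-free trials
```

Even these figures assume identical, independent trials drawn from the operational distribution; once the expanded state space of an AI-enabled system is accounted for, the required test campaign quickly exceeds what real-world data collection can supply.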
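One way to picture the structured arguments described above is as a tree that decomposes a top-level claim into subclaims, each grounded in evidence and connected by explicit reasoning and assumptions. The following is a minimal sketch under assumed names and a toy scenario; it is not a standard assurance-case notation, and every identifier here is hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Evidence:
    source: str    # kind of evidence, e.g., test data or a design review
    summary: str   # what the evidence shows

@dataclass
class Claim:
    statement: str                                          # assertion argued
    reasoning: str = ""                                     # why support holds
    assumptions: List[str] = field(default_factory=list)    # stated explicitly
    evidence: List[Evidence] = field(default_factory=list)
    subclaims: List["Claim"] = field(default_factory=list)

# A toy assurance argument: one high-level claim, two subclaims, each tied
# to a different form of evidence (hypothetical content throughout).
case = Claim(
    statement="The system performs its task reliably in its intended domain",
    reasoning="Statistical testing and runtime monitoring jointly cover the domain",
    assumptions=["The intended operating domain is correctly characterized"],
    subclaims=[
        Claim(statement="Failure rate meets the target on representative inputs",
              evidence=[Evidence("test data", "230 failure-free trials")]),
        Claim(statement="Inputs outside the tested domain are detected at runtime",
              evidence=[Evidence("design review", "runtime monitor audit")]),
    ],
)
```

The point of the structure is that the reasoning and assumptions are first-class: trust in the top-level claim can be audited by walking the tree and challenging any link.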