AI-enabled systems (and other highly complex systems) explode the traditional Test and Evaluation (T&E) state space and introduce nonlinearities, making it infeasible to collect enough real-world data to confidently assert that critical systems will perform as designed. AI Assurance addresses this challenge by expanding the kinds of information used to establish that confidence. This course will examine how AI Assurance takes high-level claims about a system’s behavior, builds systematic arguments supported by evidence, and weaves together a justification of each claim with its reasoning and underlying assumptions. Students will read and present papers on assuring AI-enabled systems, have an opportunity to present original work in the context of AI Assurance, and write a high-quality paper as part of a course project.