To drive a future in which artificial intelligence (AI)-enabled autonomous systems are trustworthy contributors to society, these systems must be designed and verified for safe, reliable operation, and they must be secure and resilient against adversarial attacks. They must also be predictable, explainable, and fair while integrating seamlessly into complex ecosystems alongside humans and technology, where the dynamics of human-machine teaming are considered in the design of the intelligent system to enable assured decision-making. In this course, students are first introduced to the field of AI, covering the fundamental concepts, theory, and solution techniques that enable intelligent agents to perceive, reason, plan, learn, infer, decide, and act over time within an environment, often under uncertainty. Students are then introduced to the assurance of AI-enabled autonomous systems, spanning security, resilience, robustness, fairness, bias, explainability, safety, reliability, and ethics for AI and autonomy. The course concludes by introducing the concept of human-machine teaming. Students develop a contextual understanding of the fundamental concepts, theory, problem domains, applications, methods, tools, and modeling approaches for assuring AI-enabled autonomous systems; implement state-of-the-art algorithms; and discuss emerging research findings in AI assurance.