Convex optimization is at the heart of many disciplines, such as machine learning, signal processing, control, and medical imaging. In this course, we will cover the theory and algorithms of convex optimization. The theory part includes convex analysis, convex optimization problems (LPs, QPs, SOCPs, SDPs, and conic programs), and duality theory. We will then explore a diverse array of algorithms for solving convex optimization problems in a variety of applications, including gradient methods, subgradient methods, accelerated methods, proximal algorithms, Newton's method, and ADMM. A solid knowledge of Linear Algebra is required for this course.
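As a small taste of the algorithms covered, here is a minimal sketch (an illustrative example, not course material) of a gradient method applied to a convex quadratic. The problem, step size, and data below are all assumptions chosen for illustration: we minimize f(x) = ½ xᵀAx − bᵀx with A symmetric positive definite, whose unique minimizer solves Ax = b.

```python
import numpy as np

def gradient_descent(A, b, step=None, iters=500):
    """Minimize f(x) = 0.5 x^T A x - b^T x by gradient descent.

    A is assumed symmetric positive definite, so f is strongly convex
    and the unique minimizer is the solution of A x = b.
    """
    if step is None:
        # Step size 1/L, where L is the largest eigenvalue of A (the
        # Lipschitz constant of the gradient), guarantees convergence.
        step = 1.0 / np.linalg.eigvalsh(A).max()
    x = np.zeros_like(b)
    for _ in range(iters):
        # Gradient of f at x is A x - b.
        x = x - step * (A @ x - b)
    return x

# Illustrative data (assumed for this sketch).
A = np.array([[3.0, 1.0], [1.0, 2.0]])  # symmetric positive definite
b = np.array([1.0, 1.0])
x_star = gradient_descent(A, b)
```

After enough iterations, `x_star` agrees with the direct solve `np.linalg.solve(A, b)`, which is one way the course connects algorithmic convergence theory to concrete computation.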