Stochastic optimization plays a large role in modern learning algorithms and in the analysis and control of modern systems. This course introduces the fundamental issues in stochastic search and optimization, with special emphasis on cases where classical deterministic search techniques (steepest descent, Newton–Raphson, linear and nonlinear programming, etc.) do not readily apply. These cases include many important practical problems in engineering, computer science, machine learning, and elsewhere, which will be discussed briefly throughout the course. Both discrete and continuous optimization problems will be considered, as will algorithms for both global and local optimization, including random search, least mean squares (LMS), stochastic approximation, stochastic gradient, simulated annealing, evolutionary computation (including genetic algorithms), and stochastic discrete optimization.
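As a small taste of one of the listed methods, the sketch below illustrates stochastic approximation in its simplest form: a Robbins–Monro iteration that minimizes a one-dimensional quadratic using only noisy gradient measurements. The objective, step-size constant, and iteration count are illustrative choices, not course material; the decaying step sizes a/(k+1) satisfy the classical conditions (their sum diverges, the sum of their squares converges).

```python
import random

def noisy_gradient(x, noise_scale=1.0):
    """Gradient of f(x) = (x - 3)^2, corrupted by zero-mean Gaussian noise.
    Only these noisy measurements are available to the algorithm."""
    return 2.0 * (x - 3.0) + random.gauss(0.0, noise_scale)

def stochastic_approximation(x0=0.0, a=0.5, iterations=20000, seed=0):
    """Robbins-Monro iteration: x_{k+1} = x_k - a_k * g_k, with a_k = a/(k+1)."""
    random.seed(seed)
    x = x0
    for k in range(iterations):
        x -= (a / (k + 1)) * noisy_gradient(x)
    return x

# The iterate converges toward the true minimizer x* = 3 despite the noise.
print(stochastic_approximation())
```

Deterministic steepest descent would use the exact gradient at each step; the point of stochastic approximation, a core topic of this course, is that convergence still holds when each gradient is observed only with noise, provided the step sizes decay appropriately.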
Prerequisites: multivariate calculus, linear algebra, and one semester of graduate probability and statistics (e.g., 625.603 Statistical Methods and Data Analysis). Some computer-based homework assignments will be given. It is recommended that this course be taken only in the last half of a student's degree program.