Probabilistic Foundations of Artificial Intelligence


How can we build systems that perform well in uncertain environments and unforeseen situations? How can we develop systems that exhibit "intelligent" behavior without prescribing explicit rules? How can we build systems that learn from experience in order to improve their performance? We will study core modeling techniques and algorithms from statistics, optimization, planning, and control, and explore applications in areas such as sensor networks, robotics, and the Internet. The course is designed for upper-level undergraduate and graduate students.

Topics covered

  • Search (BFS, DFS, A*), constraint satisfaction and optimization
  • Tutorial in logic (propositional, first-order)
  • Probability
  • Bayesian Networks (models, exact and approximate inference, learning)
  • Temporal models (Hidden Markov Models, Dynamic Bayesian Networks)
  • Probabilistic planning (MDPs, POMDPs)
  • Reinforcement learning
  • Combining logic and probability
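To give a flavor of the first topic, here is a minimal sketch of A* search in Python on a hypothetical 3x3 grid (the grid, the `neighbors` callback, and the Manhattan heuristic are illustrative assumptions, not course material):

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """A* search: returns a cheapest path from start to goal, or None.

    neighbors(n) yields (neighbor, step_cost) pairs;
    heuristic(n) must never overestimate the true cost to the goal.
    """
    # frontier entries: (f = g + h, g = cost so far, node, path to node)
    frontier = [(heuristic(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nbr, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + heuristic(nbr), g2, nbr, path + [nbr]))
    return None

# Toy example (hypothetical): 4-connected 3x3 grid, unit step costs.
def grid_neighbors(p):
    x, y = p
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx <= 2 and 0 <= ny <= 2:
            yield (nx, ny), 1

manhattan = lambda p: abs(2 - p[0]) + abs(2 - p[1])  # admissible on this grid
path = a_star((0, 0), (2, 2), grid_neighbors, manhattan)
```

With an admissible heuristic, A* returns an optimal path; with `heuristic = lambda p: 0` the same code degrades to uniform-cost search (and, on unit-cost graphs, behaves like BFS).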



Course Information
  • VVZ Information: See here.
  • Lecture: Friday 10-12 in CAB G 56
  • Recitations: Friday 13-14 in CAB G 56
  • Teaching assistants: Hastagiri Vanchinathan [hastagiri (at) inf (dot) ethz (dot) ch] and Yuxin Chen [yuxin (dot) chen (at) inf (dot) ethz (dot) ch]
  • Textbook: S. Russell, P. Norvig. Artificial Intelligence: A Modern Approach (3rd Edition).



  • TBA

Lecture Notes

Relevant Readings

  • Christopher M. Bishop. Pattern Recognition and Machine Learning. Springer, 2007 (optional)