Robust and Risk-Averse Decision-Making Under Uncertainty

Robust and risk-averse decision-making under uncertainty in partially observable domains is a fundamental problem in AI and is essential for reliable autonomous agents, e.g. in safety-critical settings, particularly in the presence of adversarial agents and uncertain stochastic models. In this research project, we focus on these challenging settings within the framework of Partially Observable Markov Decision Processes (POMDPs). The project seeks to advance fundamental theory as well as to develop prototype demonstrations on real robots.

Prerequisites:

  • Strong analytical skills and a passion for rigorous mathematical formulations.
  • Strong programming skills (preferably Python or Julia).
  • Background in probability and measure theory, MDPs/POMDPs, and (deep) reinforcement learning is an advantage.

Academic supervisor:

Duration: 1 or 2 semesters