Simplified Risk-Aware POMDP Planning

Incorporating risk awareness into decision making substantially increases algorithmic complexity, making such formulations of the problem difficult to solve online. Our approach centers on the distribution of the return in the challenging setting of continuous domains under partial observability. In this research project we introduce and investigate a simplification framework that eases the computational burden while providing guarantees on the impact of the simplification. On top of this framework, we present novel stochastic bounds on the return that apply to any reward function. Further, we consider the impact of simplification on decision making with risk-averse objectives, which, to the best of our knowledge, has not been investigated thus far.
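To make the notion of a risk-averse objective over the distribution of the return concrete, the sketch below estimates Conditional Value at Risk (CVaR) from a sample of returns. This is a minimal illustration of the kind of objective discussed above, not the algorithms from the papers; the function name and the sampling setup are ours.

```python
import numpy as np

def cvar(returns, alpha=0.95):
    """Empirical CVaR at level alpha for sampled returns (higher is better).

    CVaR_alpha is the mean of the worst (1 - alpha) fraction of outcomes,
    so it penalizes a policy for its bad tail, unlike the plain expectation.
    """
    returns = np.sort(np.asarray(returns, dtype=float))  # ascending: worst first
    k = max(1, int(np.ceil((1 - alpha) * len(returns))))  # size of the tail
    return returns[:k].mean()

# A risk-averse planner would compare candidate policies by CVaR of their
# sampled returns rather than by the sample mean.
samples = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
print(cvar(samples, alpha=0.9))  # mean of the single worst outcome
print(cvar(samples, alpha=0.5))  # mean of the worst half
```

In a POMDP setting the `samples` would be returns of simulated belief trajectories under a candidate policy; bounding how a simplification shifts this tail statistic is what a performance guarantee must capture.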

Related Publications:

Journal Articles

  1. A. Zhitnikov and V. Indelman, “Simplified Risk Aware Decision Making with Belief Dependent Rewards in Partially Observable Domains,” Artificial Intelligence, Special Issue on “Risk-Aware Autonomous Systems: Theory and Practice", Aug. 2022.
    Zhitnikov22ai.pdf DOI: 10.1016/j.artint.2022.103775

Technical Reports

  1. Y. Pariente and V. Indelman, “Simplification of Risk Averse POMDPs with Performance Guarantees,” 2024.
    arXiv: https://arxiv.org/pdf/2406.03000
  2. A. Zhitnikov and V. Indelman, “Risk Aware Belief-dependent Constrained Simplified POMDP Planning,” Sep. 2022.
    arXiv: https://arxiv.org/pdf/2209.02679

Conference Articles

  1. A. Zhitnikov and V. Indelman, “Simplified Risk-aware Decision Making with Belief-dependent Rewards in Partially Observable Domains,” in International Joint Conference on Artificial Intelligence (IJCAI), journal track, Aug. 2023.
    Zhitnikov23ijcai.pdf Zhitnikov23ijcai.poster