Simplified Belief-Dependent POMDP Planning

We consider online planning in partially observable domains. Solving the corresponding POMDP is computationally very challenging, particularly in an online setting. In this research project, we develop approaches that speed up online POMDP planning in the challenging setting of belief-dependent rewards, which enable advanced capabilities for an agent, such as autonomous uncertainty reduction and informative planning. We do so by simplifying different elements of a given decision-making problem and by assessing and controlling, online, the simplification's impact on the solution quality and performance.

Specifically, in one of the works below we focus on belief simplification, using only a subset of the state samples, and use it to formulate bounds on the corresponding original belief-dependent rewards. These bounds are then used to prune branches of the belief tree while calculating the optimal policy. We further introduce the notion of adaptive simplification, re-using calculations between different simplification levels, and exploit it to prune, at each level of the belief tree, all branches but one. Our approach is therefore guaranteed to recover the optimal solution of the original problem, with a substantial speedup. As a second key contribution, we derive novel analytical bounds on the differential entropy of a sampling-based belief representation, which we believe are of interest in their own right.
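
As a rough illustration only, the following Python sketch mimics the interface of such an approach for the sibling branches of a single belief-tree node and immediate belief-dependent rewards: reward bounds are computed from a growing prefix of the particle set, and a branch is pruned whenever its upper bound drops below the best lower bound. The function names, the surrogate bounds, and the simplification levels are hypothetical placeholders, not the analytical bounds or implementation from the papers.

```python
import numpy as np


def reward_bounds(weights, level):
    """Toy stand-in for analytical bounds on a belief-dependent reward
    (e.g., differential entropy of a particle belief).

    Only the first `level` particle weights (the simplified belief) are used;
    the returned (lower, upper) bounds tighten as `level` grows.  The slack
    term is a placeholder, not a bound derived in the papers.
    """
    w = np.asarray(weights[:level], dtype=float)
    w = w / w.sum()
    estimate = -np.sum(w * np.log(w))   # crude entropy-style surrogate
    slack = 1.0 / level                 # shrinks as more samples are used
    return estimate - slack, estimate + slack


def prune_siblings(beliefs, levels=(16, 64, 256)):
    """Adaptive simplification over sibling branches of a belief-tree node.

    `beliefs` maps each candidate action to the particle weights of its
    posterior belief (a real belief would also carry the particles).  At each
    level the bounds are tightened, reusing the coarser level's particle
    prefix, and every branch whose upper bound falls below the best lower
    bound is discarded; with valid bounds such a branch cannot be optimal.
    """
    active = dict(beliefs)
    for level in levels:                # coarse -> fine simplification levels
        bounds = {a: reward_bounds(w, level) for a, w in active.items()}
        best_lower = max(lo for lo, _ in bounds.values())
        active = {a: w for a, w in active.items() if bounds[a][1] >= best_lower}
        if len(active) == 1:            # all branches but one were pruned
            break
    return active


# usage: three candidate actions, each with a 256-particle posterior belief
rng = np.random.default_rng(0)
beliefs = {a: rng.random(256) + 1e-3 for a in ("a1", "a2", "a3")}
print(list(prune_siblings(beliefs)))
```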

In another work we introduce a framework for quantifying online the effect of a simplification, alongside novel stochastic bounds on the return. These bounds exploit the information encoded in the joint distribution of the original and simplified returns. The proposed framework is general and can be used with any bounds on the return to capture the impact of a simplification.
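
The sketch below is, again, only a hedged illustration of this idea: it estimates an empirical quantile of the loss from paired (and hence correlated) samples of the original and simplified returns. The names, numbers, and the quantile-based estimate itself are assumptions for illustration, not the stochastic bounds derived in the paper.

```python
import numpy as np


def simplification_loss_bound(original_returns, simplified_returns, delta=0.05):
    """Illustrative online check of a simplification's effect on the return.

    Takes paired samples of the original and simplified returns, so their
    correlation (the joint distribution, not only the marginals) is exploited,
    and returns the empirical (1 - delta)-quantile of the loss.  This is a
    Monte Carlo illustration, not the stochastic bounds derived in the work.
    """
    losses = np.asarray(original_returns) - np.asarray(simplified_returns)
    return np.quantile(losses, 1.0 - delta)


# usage with hypothetical paired return samples (strongly correlated on purpose)
rng = np.random.default_rng(1)
v = rng.normal(loc=10.0, scale=2.0, size=1000)                  # original returns
v_s = v - np.abs(rng.normal(loc=0.2, scale=0.1, size=1000))     # simplified returns
print(f"loss not exceeded for ~95% of samples: {simplification_loss_bound(v, v_s):.3f}")
```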

Related Publications:

Journal Articles

  1. A. Zhitnikov and V. Indelman, “Simplified Continuous High Dimensional Belief Space Planning with Adaptive Probabilistic Belief-dependent Constraints,” IEEE Transactions on Robotics (T-RO), 2024.
    Zhitnikov24tro.pdf Zhitnikov24tro.slides DOI: 10.1109/TRO.2023.3341625
  2. A. Zhitnikov, O. Sztyglic, and V. Indelman, “No Compromise in Solution Quality: Speeding Up Belief-dependent Continuous POMDPs via Adaptive Multilevel Simplification,” International Journal of Robotics Research (IJRR), 2024.
    Zhitnikov24ijrr.pdf DOI: 10.1177/02783649241261398
  3. T. Yotam and V. Indelman, “Measurement Simplification in ρ-POMDP with Performance Guarantees,” IEEE Transactions on Robotics (T-RO), 2024.
    Yotam24tro.pdf DOI: 10.1109/TRO.2024.3424018
  4. M. Barenboim, M. Shienman, and V. Indelman, “Monte Carlo Planning in Hybrid Belief POMDPs,” IEEE Robotics and Automation Letters (RA-L), no. 8, Aug. 2023.
    Barenboim23ral.pdf Barenboim23ral.slides DOI: 10.1109/LRA.2023.3282773 Barenboim23ral.supplementary
  5. M. Barenboim, I. Lev-Yehudi, and V. Indelman, “Data Association Aware POMDP Planning with Hypothesis Pruning Performance Guarantees,” IEEE Robotics and Automation Letters (RA-L), no. 10, Oct. 2023.
    Barenboim23ral2.pdf Barenboim23ral2.slides DOI: 10.1109/LRA.2023.3311205 Barenboim23ral2.supplementary
  6. A. Zhitnikov and V. Indelman, “Simplified Risk-aware Decision Making with Belief-dependent Rewards in Partially Observable Domains,” Artificial Intelligence, Special Issue on “Risk-Aware Autonomous Systems: Theory and Practice”, Aug. 2022.
    Zhitnikov22ai.pdf DOI: 10.1016/j.artint.2022.103775

Conference Articles

  1. I. Lev-Yehudi, M. Barenboim, and V. Indelman, “Simplifying Complex Observation Models in Continuous POMDP Planning with Probabilistic Guarantees and Practice,” in 38th AAAI Conference on Artificial Intelligence (AAAI-24), Feb. 2024.
    LevYehudi24aaai.pdf LevYehudi24aaai.slides LevYehudi24aaai.poster
  2. D. Kong and V. Indelman, “Simplified Belief Space Planning with an Alternative Observation Space and Formal Performance Guarantees,” in International Symposium on Robotics Research (ISRR), Dec. 2024.
    arXiv: https://arxiv.org/pdf/2410.07630
  3. A. Zhitnikov and V. Indelman, “Simplified Risk-aware Decision Making with Belief-dependent Rewards in Partially Observable Domains,” in International Joint Conference on Artificial Intelligence (IJCAI), journal track, Aug. 2023.
    Zhitnikov23ijcai.pdf Zhitnikov23ijcai.poster
  4. M. Barenboim and V. Indelman, “Online POMDP Planning with Anytime Deterministic Guarantees,” in Conference on Neural Information Processing Systems (NeurIPS), Dec. 2023.
    Barenboim23nips.pdf Barenboim23nips.supplementary Barenboim23nips.poster
  5. M. Shienman and V. Indelman, “D2A-BSP: Distilled Data Association Belief Space Planning with Performance Guarantees Under Budget Constraints,” in IEEE International Conference on Robotics and Automation (ICRA), *Outstanding Paper Award Finalist*, May 2022.
    Shienman22icra.pdf Shienman22icra.supplementary Shienman22icra.poster
  6. M. Barenboim and V. Indelman, “Adaptive Information Belief Space Planning,” in the 31st International Joint Conference on Artificial Intelligence and the 25th European Conference on Artificial Intelligence (IJCAI-ECAI), Jul. 2022.
    Barenboim22ijcai.pdf arXiv: https://arxiv.org/pdf/2201.05673.pdf Barenboim22ijcai.supplementary
  7. O. Sztyglic and V. Indelman, “Speeding up POMDP Planning via Simplification,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Oct. 2022.
    Sztyglic22iros.pdf Sztyglic22iros.supplementary Sztyglic22iros.video
  8. M. Shienman and V. Indelman, “Nonmyopic Distilled Data Association Belief Space Planning Under Budget Constraints,” in International Symposium on Robotics Research (ISRR), Sep. 2022.
    Shienman22isrr.pdf Shienman22isrr.supplementary

Technical Reports

  1. Y. Pariente and V. Indelman, “Simplification of Risk Averse POMDPs with Performance Guarantees,” 2024.
    arXiv: https://arxiv.org/pdf/2406.03000
  2. A. Zhitnikov and V. Indelman, “Anytime Probabilistically Constrained Provably Convergent Online Belief Space Planning,” 2024.
    arXiv: https://arxiv.org/abs/2411.06711
  3. G. Rotman and V. Indelman, “involve-MI: Informative Planning with High-Dimensional Non-Parametric Beliefs,” Sep. 2022.
    arXiv: https://arxiv.org/pdf/2209.11591.pdf
  4. A. Zhitnikov and V. Indelman, “Probabilistic Loss and its Online Characterization for Simplified Decision Making Under Uncertainty,” 2021.
    arXiv: https://arxiv.org/pdf/2105.05789.pdf
  5. O. Sztyglic, A. Zhitnikov, and V. Indelman, “Simplified Belief-Dependent Reward MCTS Planning with Guaranteed Tree Consistency,” May 2021.
    arXiv: https://arxiv.org/pdf/2105.14239.pdf

Theses

  1. A. Zhitnikov, “Simplification for Efficient Decision Making Under Uncertainty with General Distributions,” PhD thesis, Technion - Israel Institute of Technology, 2024.
    Zhitnikov24thesis.pdf Zhitnikov24thesis.slides Zhitnikov24thesis.video
  2. M. Barenboim, “Simplified POMDP Algorithms with Performance Guarantees,” PhD thesis, Technion - Israel Institute of Technology, 2024.
    Barenboim24thesis.pdf Barenboim24thesis.slides Barenboim24thesis.video
  3. T. Yotam, “Measurement Simplification in ρ-POMDP with Performance Guarantees,” Master's thesis, Technion - Israel Institute of Technology, 2023.
    Yotam23thesis.pdf Yotam23thesis.slides Yotam23thesis.video
  4. G. Rotman, “Efficient Informative Planning with High-dimensional Non-Gaussian Beliefs by Exploiting Structure,” Master's thesis, Technion - Israel Institute of Technology, 2022.
    Rotman22thesis.pdf Rotman22thesis.slides Rotman22thesis.video
  5. O. Sztyglic, “Online Partially Observable Markov Decision Process Planning via Simplification,” Master's thesis, Technion - Israel Institute of Technology, 2021.
    Sztyglic21thesis.pdf Sztyglic21thesis.slides Sztyglic21thesis.video

Book Chapters

  1. M. Shienman and V. Indelman, “Nonmyopic Distilled Data Association Belief Space Planning Under Budget Constraints,” in Robotics Research, Springer, 2023.
    Shienman23chapter.pdf DOI: 10.1007/978-3-031-25555-7_8