[CFP] Annals of Operations Research Special Issue - Decision-Making Under Uncertainty: A Multidisciplinary Perspective


Nan Ye

Sep 16, 2022, 11:07:04 AM
to Reinforcement Learning Mailing List

Annals of Operations Research, Special Issue on 

Decision-Making Under Uncertainty: A Multidisciplinary Perspective


Submission deadline: April 15, 2023

How do we make good decisions in the presence of uncertainty? This question arises in numerous contexts, including natural resource management and robot planning and control. The past few decades have seen significant advances in decision-making under uncertainty. These range from new domain-independent methods in areas such as artificial intelligence, statistics, operations research, robot planning, and control theory, to novel domain-specific methods in fields such as ecology, fisheries, economics, and mathematical finance. Unfortunately, progress in one domain is often overlooked by researchers in other communities.

This special issue calls for papers that provide a multidisciplinary perspective on the theory, practice, and computational techniques for decision-making under uncertainty. Submissions should demonstrate how the work is relevant to researchers from different communities. Examples include theoretical studies of decision models relevant to disparate fields, and novel applications of tools from one field to another.

Potential topics include, but are not limited to, the following:

  • Decision models (e.g., Markov Decision Processes [MDPs], POMDPs)

  • Decision theory (e.g., expected utility theory, bounded rationality)

  • Planning under uncertainty

  • Reinforcement learning

  • Stochastic control (e.g., LQG, robust control)

  • Operations research

  • Applications (e.g., natural resource management, robot autonomy, pandemic management, natural disaster response, portfolio management)

Instructions for authors can be found at: https://www.springer.com/journal/10479/submission-guidelines

Authors should submit a cover letter and a manuscript by April 15, 2023, via the Journal's online submission site. Please see the Author Instructions on the website if you have not yet submitted a paper through Springer's web-based system, Editorial Manager. When prompted for the article type, please select Original Research. On the Additional Information screen, you will be asked whether the manuscript belongs to a special issue; please choose yes and select the special issue's title, Decision-Making Under Uncertainty: A Multidisciplinary Perspective, to ensure that it is reviewed for this special issue. Manuscripts submitted after the deadline may not be considered for the special issue and may, if accepted, be transferred to a regular issue.

Papers will be subject to a strict review process under the supervision of the Guest Editors, and accepted papers will be published online individually, before print publication.

Guest Editors

Nan Ye, The University of Queensland
Hanna Kurniawati, Australian National University
Marcus Hoerger, The University of Queensland
Dirk Kroese, The University of Queensland
Jerzy Filar, The University of Queensland


Warren Powell

Sep 16, 2022, 11:35:08 AM
to rl-...@googlegroups.com, yena...@gmail.com
  Wow!  This special issue looks great! I wish I was still writing papers!

I like the list of the different subfields of stochastic optimization covered by the special issue.  Section 2 of my new book (https://tinyurl.com/RLandSO/) lists 15 distinct fields (section 2.1 on stochastic search is subdivided into derivative-free and derivative-based stochastic search).  The book offers a unified framework for *any* sequential decision problem (which covers all the subtopics listed for this special issue).  I then divide these into two broad classes:

1. Pure learning problems - These are settings where the *problem* itself is static, meaning independent of any state variable; the only state variable is a belief about an unknown function. These are covered in chapters 5-7.
2. State-dependent problems (chapters 9-20) - This is the much richer class, where there can be a physical state variable (e.g. inventory), other information (prices, weather), and beliefs. Introducing belief states into traditional dynamic programs opens up a very rich set of problems. Some will refer to these as POMDPs, but I think the POMDP literature is quite confused. See my discussion of this in chapter 20 of my book. A toy sketch contrasting the two classes follows below.
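
To make the two classes concrete, here is a minimal Python sketch (not code from the book; all names and parameter values are hypothetical): a Beta-Bernoulli bandit, where the only state is a belief about unknown success probabilities, followed by a toy inventory problem, where the state carries physical inventory and sees an exogenous price each period.

# Minimal sketch of the two problem classes above (hypothetical names and parameters).
import numpy as np

rng = np.random.default_rng(0)

# 1. Pure learning: the only state is a belief about an unknown function.
#    Beta-Bernoulli bandit with a Thompson-sampling-style policy.
alpha, beta = np.ones(3), np.ones(3)        # belief state: Beta(alpha, beta) for each arm
true_p = np.array([0.3, 0.5, 0.7])          # unknown to the decision maker

for t in range(100):
    arm = int(np.argmax(rng.beta(alpha, beta)))   # sample from the beliefs, pick the best
    reward = float(rng.random() < true_p[arm])    # Bernoulli outcome
    alpha[arm] += reward                          # the belief update is the only state transition
    beta[arm] += 1.0 - reward

# 2. State-dependent: a physical state (inventory) plus exogenous information (price).
inventory, profit = 0, 0.0
for t in range(100):
    price = 5.0 + rng.normal()              # exogenous information arriving each period
    order = max(0, 10 - inventory)          # simple order-up-to-10 policy (one of many choices)
    inventory += order
    demand = rng.poisson(8)
    sales = min(inventory, demand)
    inventory -= sales
    profit += price * sales - 3.0 * order   # revenue minus purchase cost

print("posterior means:", alpha / (alpha + beta), "| cumulative profit:", round(profit, 1))

In the first loop there is no physical state at all; in the second, the decision has to respond to both the physical inventory and the exogenous price.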

Anyone who wants a quick peek into my way of thinking should go to


Just click on the front cover of the book to download the PDF. It is being published by Now Publishers (which handles the "Foundations and Trends" series), but it will always be available as a free download. Chapter 1 gives a quick overview of the modeling framework and the four classes of policies. The remaining chapters (other than chapter 7) each illustrate the framework in the context of an application designed to bring out specific issues (such as illustrating all four classes of policies).
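
For readers who haven't seen them, the four classes of policies in Powell's published framework are policy function approximations (PFAs), cost function approximations (CFAs), policies based on value function approximations (VFAs), and direct lookahead approximations (DLAs). As a rough sketch, here is what each class might look like on the hypothetical inventory toy problem above (function names, signatures, and parameters are illustrative, not from the book):

# Illustrative sketch of the four policy classes on the toy inventory problem
# (hypothetical functions and parameters; not code from the book).

def pfa_policy(inventory, theta=10):
    """Policy function approximation: an explicit parameterized rule (order up to theta)."""
    return max(0, theta - inventory)

def cfa_policy(inventory, demand_forecast=8, buffer=2):
    """Cost function approximation: a deterministic model with a tunable safety buffer."""
    return max(0, demand_forecast + buffer - inventory)

def vfa_policy(inventory, value_fn, demand=8, price=5.0, cost=3.0):
    """Value function approximation: maximize immediate profit plus an approximate downstream value."""
    def q(order):
        sales = min(inventory + order, demand)
        next_inventory = inventory + order - sales
        return price * sales - cost * order + value_fn(next_inventory)
    return max(range(20), key=q)

def dla_policy(inventory, horizon=3, demand=8, price=5.0, cost=3.0):
    """Direct lookahead: optimize the first decision of a short deterministic lookahead model."""
    best_order, best_value = 0, float("-inf")
    for first_order in range(20):
        inv, value = inventory, 0.0
        for t in range(horizon):
            order = first_order if t == 0 else max(0, demand - inv)
            inv += order
            sales = min(inv, demand)
            value += price * sales - cost * order
            inv -= sales
        if value > best_value:
            best_order, best_value = first_order, value
    return best_order

print(pfa_policy(3), cfa_policy(3), vfa_policy(3, value_fn=lambda s: 0.5 * s), dla_policy(3))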

Warren


------------------------------
Warren B. Powell
Chief Analytics Officer, Optimal Dynamics
Professor Emeritus, Princeton University

