Theme

The theme of this year's workshop is Planning in Reinforcement Learning.

Planning means any way of using computation to go from a model of the world to a good policy or value function. A model of the world is any way of going from a state and a proposed course of action to a next state (and reward along the way). We are particularly interested in models that could be learned rather than ones that have to be provided by people. 
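Since "planning" and "model" are meant quite operationally here, a small sketch may make the definitions concrete. The following Python fragment is illustrative only, not any particular system: it assumes a small, finite MDP whose model is a deterministic function from (state, action) to (next state, reward), and it shows planning as computation (here, value-iteration sweeps) that turns such a model into a policy. All names are hypothetical, and a learned model could expose the same interface.

    # A sketch only: Model, plan, and all other names are hypothetical.
    # Assumes a finite MDP with a deterministic model given as a
    # function (state, action) -> (next_state, reward).
    from typing import Callable, Dict, List, Tuple

    State = int
    Action = int
    Model = Callable[[State, Action], Tuple[State, float]]

    def plan(model: Model, states: List[State], actions: List[Action],
             gamma: float = 0.95, sweeps: int = 100) -> Dict[State, Action]:
        """Use computation plus a model to produce a policy (value iteration)."""
        v = {s: 0.0 for s in states}
        for _ in range(sweeps):
            for s in states:
                # Back up the best one-step return predicted by the model.
                v[s] = max(model(s, a)[1] + gamma * v[model(s, a)[0]]
                           for a in actions)
        # The policy is greedy with respect to the computed values.
        return {s: max(actions,
                       key=lambda a: model(s, a)[1] + gamma * v[model(s, a)[0]])
                for s in states}

For instance, on a five-state chain with actions [0, 1] and the model lambda s, a: (min(s + a, 4), 1.0 if (s, a) == (3, 1) else 0.0), this computation should return the policy that moves right toward the rewarding transition.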

Good subtopics for inclusion in this year's workshop include, but are not limited to:

  • foundational issues in planning
  • least-squares methods in reinforcement learning
  • online intermixing of learning, planning, and acting (a Dyna-style sketch follows this list)
  • architectures for planning with temporal abstraction
  • planning with function approximation
  • relationships between planning in reinforcement learning and planning in other fields, e.g., in
    • experimental psychology
    • classical artificial intelligence
    • robotics
    • control theory
    • neuroscience
  • planning with learned models
  • planning with incomplete state observability
  • linear programming approaches to planning
  • sample-based and Monte Carlo approaches to planning
  • planning for robots and real-time systems
  • planning by experience replay
  • model-based reinforcement learning
  • data efficiency of planning, e.g., vs. learning
  • computational complexity of planning
  • interaction between planning and exploration
  • planning in relational reinforcement learning
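
As a concrete illustration of one subtopic above, the online intermixing of learning, planning, and acting, here is a minimal tabular Dyna-Q sketch. It is an assumption-laden sketch rather than a reference implementation: env is assumed to expose reset() -> state, step(action) -> (next_state, reward, done), and a num_actions attribute, in the spirit of common Gym-style interfaces.

    import random
    from collections import defaultdict

    def dyna_q(env, num_steps=10000, planning_steps=10,
               alpha=0.1, gamma=0.95, epsilon=0.1):
        """Tabular Dyna-Q: learn a model from experience and plan with it online."""
        q = defaultdict(float)   # q[(state, action)] value estimates
        model = {}               # (state, action) -> (reward, next_state, done)
        actions = list(range(env.num_actions))  # assumed attribute

        state = env.reset()
        for _ in range(num_steps):
            # Act: epsilon-greedy with respect to current value estimates.
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: q[(state, a)])
            next_state, reward, done = env.step(action)

            # Learn: one-step Q-learning update from real experience.
            best_next = 0.0 if done else max(q[(next_state, a)] for a in actions)
            q[(state, action)] += alpha * (reward + gamma * best_next
                                           - q[(state, action)])

            # Remember: a deterministic learned model of the world.
            model[(state, action)] = (reward, next_state, done)

            # Plan: extra updates from transitions simulated by the model.
            for _ in range(planning_steps):
                (s, a), (r, s2, d) = random.choice(list(model.items()))
                best = 0.0 if d else max(q[(s2, b)] for b in actions)
                q[(s, a)] += alpha * (r + gamma * best - q[(s, a)])

            state = env.reset() if done else next_state
        return q

The design point the sketch makes is that the same update rule serves both learning from real experience and planning from model-simulated experience; only the source of the transitions differs.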

Finally, although it is good to have a theme each year, there is always residual interest in previous years' themes. Some themes from past years that seem to keep recurring are life-long learning, perceptual learning and representational change, state estimation, function approximation, real-time learning, and temporal abstraction. It would not be inappropriate for there to be echoes of these themes in this year's meeting.