Symbolic planners for problems and domains specified in PDDL.
Make sure PDDL.jl is installed. For the latest stable version, press `]` at the Julia REPL to enter the package manager, then run:

```
add SymbolicPlanners
```

For the latest development version, run:

```
add https://github.com/JuliaPlanners/SymbolicPlanners.jl
```
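Equivalently, installation can be scripted with the `Pkg` API. A minimal sketch, using the same package name and repository URL as above:

```julia
using Pkg

# Stable release from the General registry
Pkg.add("SymbolicPlanners")

# Development version, tracking the repository's default branch:
# Pkg.add(url="https://github.com/JuliaPlanners/SymbolicPlanners.jl")
```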
Features include:

- Forward state-space planning (A*, BFS, etc.)
- Backward (i.e. regression) planning
- Policy-based planning (RTDP, RTHS, MCTS, etc.)
- Relaxed-distance heuristics (Manhattan, hadd, hmax, etc.)
- Policy and plan simulation
- Modular framework for goal, reward and cost specifications
- Support for PDDL domains with numeric fluents and custom datatypes
A simple usage example is shown below. More information can be found in the documentation.
```julia
using PDDL, PlanningDomains, SymbolicPlanners

# Load Blocksworld domain and problem
domain = load_domain(:blocksworld)
problem = load_problem(:blocksworld, "problem-4")

# Construct initial state from domain and problem
state = initstate(domain, problem)

# Construct goal specification that requires minimizing plan length
spec = MinStepsGoal(problem)

# Construct A* planner with h_add heuristic
planner = AStarPlanner(HAdd())

# Find a solution given the initial state and specification
sol = planner(domain, state, spec)
```
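Once found, the solution can be inspected. A small sketch, assuming (as in the package documentation) that the returned `sol` is an ordered solution whose `trajectory` field stores the sequence of visited states:

```julia
# Iterate over the solution to collect the action sequence
plan = collect(sol)

# Check that the final state in the trajectory satisfies the goal
@assert satisfy(domain, sol.trajectory[end], problem.goal)
```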
The following planners are supported (a construction sketch follows the list):

- Forward Breadth-First Search
- Forward Heuristic Search (A*, Greedy, etc.)
- Backward Heuristic Search (A*, Greedy, etc.)
- Bidirectional Heuristic Search
- Real Time Dynamic Programming (RTDP)
- Real Time Heuristic Search (RTHS)
- Monte Carlo Tree Search (MCTS)
- FastDownward, Pyperplan, and ENHSP wrappers
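As a rough sketch of how these planners are constructed (constructor names follow the package exports used in the example above; keyword options are omitted rather than guessed):

```julia
# Uninformed forward search
bfs = BreadthFirstPlanner()

# Informed forward search with the h_add heuristic
astar = AStarPlanner(HAdd())

# Greedy best-first search, guided by the goal-count heuristic
greedy = GreedyPlanner(GoalCountHeuristic())

# All planners share the same calling convention
sol = astar(domain, state, spec)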
Supported search heuristics include (a construction sketch follows the list):

- Goal Count: counts the number of unsatisfied goals
- Manhattan: L1 distance for arbitrary numeric fluents
- Euclidean: L2 distance for arbitrary numeric fluents
- HSP heuristics: hadd, hmax, etc.
- HSPr heuristics: the above, but for backward search
- FF heuristic: length of a relaxed plan, used by the Fast-Forward planner
- LMCut heuristic: admissible heuristic based on action landmarks
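A sketch of heuristic construction (the `xpos` and `ypos` fluents are hypothetical names for a domain with numeric coordinates):

```julia
# Domain-general heuristics take no arguments
h_goal_count = GoalCountHeuristic()
h_max = HMax()
h_ff = FFHeuristic()

# Distance heuristics are parameterized by the numeric fluents to compare
h_dist = ManhattanHeuristic(@pddl("(xpos)", "(ypos)"))

# Heuristics are passed to planners at construction time
planner = AStarPlanner(h_ff)
```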
The following goal, reward, and cost specifications are supported (a construction sketch follows the list):

- MinStepsGoal: Minimize steps to reach a (symbolically-defined) goal
- MinMetricGoal: Minimize a metric formula when reaching a goal
- MaxMetricGoal: Maximize a metric formula when reaching a goal
- StateConstrainedGoal: Adds state constraints that must hold throughout a plan
- GoalReward: Achieve reward upon reaching a goal state
- BonusGoalReward: Adds goal reward to an existing specification
- MultiGoalReward: Receive separate rewards for achieving separate goals
- DiscountedReward: Discounts the rewards or costs of an existing specification
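For illustration, two ways a minimum-steps specification might be constructed; a sketch assuming `MinStepsGoal` also accepts an explicit goal formula, where the `pddl"..."` goal is a made-up Blocksworld example:

```julia
# Minimize plan length, with goals taken from a PDDL problem
spec = MinStepsGoal(problem)

# The same specification type, built from a goal formula
spec = MinStepsGoal(pddl"(on a b)")

sol = planner(domain, state, spec)
```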
After Julia's JIT compilation, and using the same search algorithm (A*) and search heuristic (hadd), SymbolicPlanners.jl with the PDDL.jl compiler is (as of February 2022):
- 10 to 50 times as fast as Pyperplan,
- 0.1 to 1.2 times as fast as FastDownward,
- 0.7 to 36 times as fast as ENHSP on numeric domains without action costs.
A comparison on domains and problems from the 2000 and 2002 International Planning Competitions is shown below. Runtimes are relative to SymbolicPlanners.jl using the PDDL.jl compiler. In each cell, we report the first quartile (Q1), median (M), and third quartile (Q3) across solved problems. Experiment code is available here.