While studying different approaches to planning algorithms in Prolog and reviewing the literature, I have noticed two broad classes of planning algorithms. The first category is the one I am most familiar with: A*, DFS/IDFS, BFS, lowest cost, best first, etc., and variants such as goal regression (starting from the end and working back to the beginning). The other involves using CLP for SAT solving. In Prolog planning algorithms for STRIPS-like environments, progressing from one state to a new state via an action involves checking that the action's preconditions hold in the current state and then applying the action's effects to obtain the successor state.
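In sketch form, assuming a state is just a list of ground facts and actions are stored as action(Name, PreConds, DelList, AddList) facts (these names are illustrative and not taken from any of the code referenced below), such a step might look like:

:- use_module(library(lists)).

% step(State0, Action, State): one STRIPS-style progression step (sketch).
% Succeeds when Action is applicable in State0; State is the successor state.
step(State0, Action, State) :-
    action(Action, PreConds, DelList, AddList),
    holds_all(PreConds, State0),          % every precondition is in State0
    del_all(DelList, State0, State1),     % remove the delete-list effects
    add_all(AddList, State1, State).      % add the add-list effects

holds_all([], _).
holds_all([P|Ps], S) :- member(P, S), holds_all(Ps, S).

del_all([], S, S).
del_all([X|Xs], S0, S) :-
    ( select(X, S0, S1) -> true ; S1 = S0 ),
    del_all(Xs, S1, S).

add_all([], S, S).
add_all([X|Xs], S0, S) :-
    ( member(X, S0) -> S1 = S0 ; S1 = [X|S0] ),
    add_all(Xs, S1, S).

% made-up example action: move block X from Y onto the table
action(move_to_table(X, Y),
       [clear(X), on(X, Y)],
       [on(X, Y)],
       [on(X, table), clear(Y)]).

A forward planner then just searches over step/3 (DFS, IDFS, A*, whatever), e.g. ?- step([clear(a), on(a, b), on(b, table)], Action, State), and checks each reached state against the goal with an ordinary predicate.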
Visually, the progression can be envisioned as a chain of states connected by actions. Here is a full example of this, using DCGs and IDFS.

The other style I have seen involves using a constraint solver to put boolean labels on the states; the progression step gets modified accordingly. Here is an example of GRAPHPLAN that uses this technique (not perfect, but it runs in Scryer). (Note: yes, I KNOW there are major issues with the code; there are many other open discussions where I am working through the techniques as I learn to write it more effectively :) ) Notice this part in particular:

% add_actions( StateLev, Acts, ActLev0, ActLev, NextStateLev0, NextStateLev):
% ActLev = ActLev0 + Acts that are potentially possible in StateLev
% NextStateLev = NextStateLev0 + effects of Acts that are potentially possible in StateLev
% Set up constraints between actions in ActLev and both state levels
add_actions(_, [], ActionLev, ActionLev, NextStateLev, NextStateLev).     % no candidate actions left
add_actions(StateLev, [action(A, PreCond, Effects) | Acts], ActLev0, ActLev, NextLev0, NextLev) :-
    (   TA in 0..1,                            % Boolean indicator variable for action A
        includes(StateLev, PreCond, TA),       % A is potentially possible in StateLev
        add_effects(TA, Effects, NextLev0, NextLev1),
        add_actions(StateLev, Acts, [A/TA | ActLev0], ActLev, NextLev1, NextLev)
    ;   add_actions(StateLev, Acts, ActLev0, ActLev, NextLev0, NextLev)   % otherwise skip action A
    ).

Now, again, the GRAPHPLAN code is a mess. While I was working on the rewrite, I noticed that there is no need to use boolean constraints to solve the problem; it could be written the same way I wrote the DCG/IDFS solver. However, in certain cases, such as this planner/scheduler, I would be making heavy use of clpz anyway (although notice that in this code they did not use clpz as a SAT solver). So, keeping in mind that the above GRAPHPLAN is not perfect, the main thing I am trying to understand is: what are the tradeoffs of using clpz as a SAT solver to give a binary success/fail label to states versus using predicates to test for success/failure of the states in these sorts of planning algorithms? For clarity, I would also like to point out that I do not mean to suggest brute-force testing of all possible states. What I mean is that for the same algorithm (brute force, or using a heuristic, or hill climbing, or whatever) to find a (single) solution OR the OPTIMAL solution, would it be preferable to use clpz for satisfiability testing or a predicate for satisfiability testing? Hopefully the question makes some sense.

Edit: updated the question to reflect not just clp(z) but also clp(b). Consideration: can clp(b) "see" what clp(z) sees? So, for instance, if I were doing a scheduling algorithm and making heavy use of clpz anyway, would using clp(b) in conjunction with clp(z) be superior to using clp(z) for all CLP-related work? This gives us several approaches: clpz for both integer and boolean constraints, clpz together with clp(b), or perhaps some other combination of approaches.
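To make the contrast I am asking about concrete, here is a minimal sketch of the two styles of labelling. It assumes a state level is a list of Fact/T pairs with T a 0..1 variable (as the A/TA pairs above suggest); holds_now/2 and preconds_link/3 are made-up names, not the actual includes/3 from that program:

:- use_module(library(lists)).
:- use_module(library(clpz)).

% Style 1: plain predicate test. Decide right now whether Fact is marked
% true at this level; alternatives are explored by backtracking in the
% search procedure itself.
holds_now(Fact, StateLev) :-
    member(Fact/1, StateLev).

% Style 2: reified 0/1 constraints. Do not decide yet; link the action's
% 0..1 indicator TA to the truth variables of its preconditions and leave
% all of the 0/1 decisions to a single labeling/2 call at the end.
preconds_link(_, [], _).
preconds_link(StateLev, [P|Ps], TA) :-
    member(P/TP, StateLev),
    TA #=< TP,                 % on 0/1 variables this is the implication TA -> TP
    preconds_link(StateLev, Ps, TA).

With Style 1 each test succeeds or fails immediately, so pruning happens early but every alternative is a choice point for the search procedure. With Style 2 nothing is decided up front; the constraints accumulate across all the levels and labeling/2 searches for a consistent 0/1 assignment, with propagation doing part of the pruning. The same implication could also be stated with library(clpb) as sat(TA =< TP), which is really the clp(b)-vs-clpz part of the question: whether keeping the boolean side in clp(b) buys anything when the rest of the model already lives in clpz.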
Replies: 1 comment 5 replies
Have you considered
So as far as I can tell, there is no reason not to use CLP whenever possible while doing search/planning/scheduling/CSP. Especially by the time you are doing crazy stuff like distributed multi-agent angelic planning and hierarchical task networks, using CLP keeps the notation very familiar, and those methods are really no different from most other planning methods.
If anyone disagrees, please let me know!