Value and QueryResolution serialization with serde #2493

Closed
bakaq wants to merge 10 commits from the value_serde branch

Conversation

@bakaq (Contributor) commented Aug 16, 2024

This PR implements serde::Serialize for Value, QueryMatch, QueryResolution and QueryResolutionLine, giving them all access to a good JSON serialization. This is built on top of the improved representation and parsing of terms from #2475. For example, the result of calling Machine::run_query() on A = a(1,2.5), B = [asdf, "fdsa", C, _, true, false] ; Rat is 3 rdiv 7. can be serialized into this:

[
  {
    "bindings": {
      "A": { "functor": "a", "args": [1, 2.5] },
      "B": [
        { "atom": "asdf" },
        "fdsa",
        { "variable": "C" },
        { "variable": "_A" },
        true,
        false
      ]
    }
  },
  {
    "bindings": {
      "Rat": { "rational": [3, 7] }
    }
  }
]
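
For reference, a rough usage sketch from the embedding side. Machine::new_lib() and the exact run_query() signature are assumptions based on the existing lib machine interface rather than something this PR defines; only the Serialize impls are new here:

use scryer_prolog::machine::Machine;

fn main() {
    // Assumed constructor and signature of the lib machine; may differ slightly.
    let mut machine = Machine::new_lib();
    let resolution = machine
        .run_query(r#"A = a(1,2.5), B = [asdf, "fdsa", C, _, true, false]."#.to_string())
        .expect("query should run");

    // With this PR, QueryResolution implements serde::Serialize,
    // so serde_json can produce the JSON shown above.
    println!("{}", serde_json::to_string_pretty(&resolution).unwrap());
}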

I chose to represent the bindings like this because it's much more convenient for end users than having to dig into lists of { "functor": "=", "args": [...] }. There are also From implementations from all of these types to Value, so you can still interpret the whole result of a query as a Prolog term if you really want. Also, notice that there is a level of indirection for the bindings: this is so that we can easily extend the format to represent things like residual goals in a future API:

{
  "bindings": ["..."],
  "residual-goals": ["..."]
}

Some details: integers that don't fit in an i64 (or in a u64, in the case of the denominator of a rational) are represented as strings. So the query Int is 10^100, Rat is 10^100 rdiv 7 gives:

[
  {
    "bindings": {
      "Int": {
        "integer": "10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"
      },
      "Rat": {
        "rational": [
          "10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
          7
        ]
      }
    }
  }
]

This doesn't provide implementations for Deserialize. They are much harder to implement and I don't think they would be very useful with the interface we have now.

src/machine/lib_machine.rs (review comment, outdated, resolved)
@jjtolton commented Aug 16, 2024

First of all, this is brilliant, and solves a lot of problems I'm currently dealing with in the shared library.

This doesn't provide implementations for Deserialize. They are much harder to implement and I don't think they would be very useful with the interface we have now.

Thinking about this is still a potentially useful exercise, though. As the popularity of the shared library grows and demands on the data become more intensive, it will probably become suboptimal to transfer data across the shared library interface via string serialization and deserialization. At that point the C-style API that you discussed initially is going to become more and more useful, along with a zero-copy pathway that would allow for a shared memory space between the runtimes. I agree that it may be a bit premature to start thinking about deserialization now, but it might be good to at least start thinking about it.

@bakaq bakaq marked this pull request as draft August 17, 2024 22:26
@bakaq bakaq force-pushed the value_serde branch 2 times, most recently from ce4e8ed to e227b99 Compare August 19, 2024 21:28
@bakaq bakaq requested a review from Skgland August 19, 2024 21:31
@bakaq (Contributor, author) commented Aug 19, 2024

I now use #[derive(Serialize)] for QueryMatch (and use it in QueryResolutionLine), and I migrated the integration tests to the new interface (and to actually be an integration test in the tests/scryer_lib directory).

I just generated the integration tests file from the current JSON output. I think it's fine, but there are so many cases that I couldn't check them all manually. The major changes are that there is now a layer of indirection with { "bindings": { } } and that strings and atoms are differentiated; apart from that, nothing should have changed. @lucksus may want to look at this to see if it is usable for his use case.

@bakaq bakaq marked this pull request as ready for review August 19, 2024 21:37
@Skgland (Contributor) left a comment

Two notes, otherwise this looks good.

src/machine/parsed_results.rs (two review comments, outdated, resolved)
@bakaq bakaq requested a review from Skgland August 20, 2024 17:39
@Skgland (Contributor) left a comment

A small nit that is also fine to just ignore, otherwise this looks good to me.

src/machine/parsed_results.rs (review comment, outdated, resolved)
@bakaq (Contributor, author) commented Aug 20, 2024

Ok, so with that I think this is ready to merge @mthom.

@Skgland (Contributor) commented Aug 20, 2024

Should merging maybe wait for a response from @lucksus?
They also might want #2491 to land first so that they have a version that works for them without the serialized output changes.

@bakaq (Contributor, author) commented Aug 26, 2024

(this currently comes back as {'X': {'variable': '_A'}})

This is correct! clpz:(X in 1..5) is a residual goal, which is out of scope for this PR. In the way I implemented this, it would go in a field adjacent to bindings (that's why there is that level of indirection):

{
  "bindings": { "Var": "binding" },
  "residual": [ "clpz:(X in 1..5) would go here" ]
}

If you really want to get the residual goals, see copy_term/3 (to get the residual goals of attributed variables) and call_residue_vars/2 (to get which variables were attributed when you ran a goal) to turn them into answer substitutions that would appear in the bindings field here. Those two predicates are precisely how the toplevel gets the residual goals, by the way.

Serialization format

One other observation is that it would be significantly more convenient for many consumer languages if all of the return values were objects that have some sort of "type" tag.

Yeah, I thought about that in my original proposal. @Skgland pointed in the direction of using JSON types where possible, which I think is very nice, but it does indeed make the output harder to use by not having a uniform representation for all terms. This can be easily changed, and it would be best to do so in this PR, so I would like to hear your opinions. Let's review our options:

1: Using JSON types where possible

// 10
10
// Use:
// switch typeof(term) {
//   int => ...,
//   obj => if term.has_field(...)
//     ...
//   else if term.has_field(...)
//     ...
//   end,
// }

This is what I currently do in this PR, falling back to external tagging when there isn't a native JSON representation, and leaving compound terms untagged.

  • Pros:
    • Smaller serialized text.
    • Mostly preserves semantics.
  • Cons:
    • Non-uniform representation of terms.

2: Externally tagged

// 10
{ "integer": 10 }
// a(1,2)
{ "compound": { "functor": "a", "args": [ { "integer": 1 }, { "integer": 2 } ] } }
// Use:
// if term.has_field("integer")
//   ...
// else if term.has_field("compound")
//   ...
// end

This is closer to my initial proposal (it had the same fallbacks as above), and is the serde default for enums.

  • Pros:
    • Uniform representation of terms.
  • Cons:
    • Larger serialized text.
    • Non-uniform checking of fields to decide a term's type.

2.1 Untagged compound terms

// a(1,2)
{ "functor": "a", "args": [ { "integer": 1 }, { "integer": 2 } ] }

This gets rid of a layer of indirection with no downsides (the presence of the functor field can serve as the type tag), so we should probably do this if we go with JSON types or external tagging.

3: Adjacently tagged

// 10
{ "type": "integer", "value": 10 }
// a(1,2)
{
  "type": "compound",
  "value": {
    "functor": "a",
    "args": [
      { "type": "integer", "val": 1 },
      { "type": "integer", "val": 2 }
    ]
  }
}
// Use:
// switch term["type"] {
//   "integer" => ...,
//   "compound" => ...,
//   ...
// }

This is basically what @jjtolton proposed (plus internally tagged compound terms, see below). This is also the JSON equivalent of a clean representation in Prolog, so that's nice.

  • Pros:
    • Uniform representation of terms.
    • Uniform checking of fields to decide a term's type.
  • Cons:
    • Much larger serialized text.

3.1 Internally tagged compound terms

// a(1,2)
{
  "type": "compound",
  "functor": "a",
  "args": [
    { "type": "integer", "val": 1 },
    { "type": "integer", "val": 2 }
  ]
}

This only really makes sense for compound terms, but it gets rid of a layer of indirection with no downsides, so we should do this if we go with adjacent tagging for the rest.

Conclusion

I think I would like either full JSON types or full adjacent/internal tagging, and I'm leaning more toward the latter. External tagging seems like the worst of both worlds. Opinions?

See also

@Skgland (Contributor) commented Aug 26, 2024

What about a mixture of 1 and 3.1? Using JSON types where they make sense and having an internal type tag where we need to disambiguate.

// 10
10
// Use:
// switch typeof(term) {
//   int => ...,
//   string => ...,
//   list => ...,
//   obj => switch term["type"] {
//     "functor" => ...,
//     "atom"    => ...,
//     ⋮
//   },
// }

Pros:

  • Smaller serialized text.
  • Mostly preserves semantics.
  • only need to check the value of a known field when we fall back to an object

Cons:

  • Non-uniform representation of terms.
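
A minimal sketch of how that mixed mapping could be written as a hand-rolled Serialize impl (the Value variants and tag names here are illustrative, not the actual types in this PR):

use serde::ser::{Serialize, SerializeMap, Serializer};

// Illustrative subset of the term type; not the actual Value in this PR.
enum Value {
    Integer(i64),
    Float(f64),
    Atom(String),
    String(String),
    List(Vec<Value>),
    Compound(String, Vec<Value>),
    Var(String),
}

impl Serialize for Value {
    fn serialize<S: Serializer>(&self, s: S) -> Result<S::Ok, S::Error> {
        match self {
            // JSON types where they make sense...
            Value::Integer(i) => s.serialize_i64(*i),
            Value::Float(f) => s.serialize_f64(*f),
            Value::String(text) => s.serialize_str(text),
            Value::List(items) => items.serialize(s),
            // ...and an internal "type" tag where we need to disambiguate.
            Value::Atom(name) => {
                let mut m = s.serialize_map(Some(2))?;
                m.serialize_entry("type", "atom")?;
                m.serialize_entry("atom", name)?;
                m.end()
            }
            Value::Var(name) => {
                let mut m = s.serialize_map(Some(2))?;
                m.serialize_entry("type", "variable")?;
                m.serialize_entry("variable", name)?;
                m.end()
            }
            Value::Compound(functor, args) => {
                let mut m = s.serialize_map(Some(3))?;
                m.serialize_entry("type", "compound")?;
                m.serialize_entry("functor", functor)?;
                m.serialize_entry("args", args)?;
                m.end()
            }
        }
    }
}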

@jjtolton

If you really want to get the residual goals, see copy_term/3 (to get the residual goals of attributed variables) and call_residue_vars/2 (to get which variables were attributed when you ran a goal) to turn them into answer substitutions that would appear in the bindings field here. Those two predicates are precisely how the toplevel gets the residual goals, by the way.

Could you possibly provide an example? 😅 I have heard this advice repeated before, but I haven't figured out how to do it or seen an example clear enough for my tests.

@jjtolton commented Aug 26, 2024

What about a mixture of 1 and 3.1? Using JSON types where they make sense and having an internal type tag where we need to disambiguate.

Would there ever be a type other than "compound" in internally tagged? 🤔 Because the hashmap by itself would signify compound.

Edit:

Oh, after reading up on internally tagged, this seems like it would be very prone to breaking client libraries. Any time we change the internal representation, the output would change. I don't see any reason why clients should need to care about the Rust implementation of the Prolog code, unless it was made to align very closely with Prolog terminology.

@bakaq (Contributor, author) commented Aug 26, 2024

Would there ever be a type other than "compound" in internally tagged? 🤔 Because the hashmap by itself would signify compound.

How would you differentiate atom from string? How would you deal with arbitrary precision integers? Those are places where I think we need an object to represent it.

Could you possibly provide an example? 😅 I have heard this advice repeated before, but I haven't figured out how to do it or seen an example clear enough for my tests.

?- call_residue_vars(X in 1..10, ResVars), copy_term(ResVars, ResVars, ResGoals).
   ResVars = [X], ResGoals = [clpz:(X in 1..10)], clpz:(X in 1..10).

In this case call_residue_vars/2 is unnecessary because you know only X has attributes, but it can get variables that aren't in the query but are still attributed for some reason.

What about a mixture of 1 and 3.1? Using JSON types where they make sense and having an internal type tag where we need to disambiguate.

This may be a good way to do it; I like it too.

@Skgland (Contributor) commented Aug 26, 2024

What about a mixture of 1 and 3.1? Using JSON types where they make sense and having an internal type tag where we need to disambiguate.

Would there ever be a type other than "compound" in internally tagged? 🤔 Because the hashmap by itself would signify compound.

Value currently differentiates between

  • Integer
  • Rational
  • Float
  • Atom
  • String
  • List
  • Struct
  • Var

The internally tagged cases would be

  • Integer out of range for u64/i64 (limit of serde_json) with the value being a string containing the integer value
  • Rational
  • Atom, to differentiate from String, except true, false and []
  • Variable to differentiate from String
  • Struct

Edit:

Oh, after reading up on internally tagged, this seems like it would be very prone to breaking client libraries. Any time we change the internal representation, the output would change. I don't see any reason why clients should need to care about the Rust implementation of the Prolog code, unless it was made to align very closely with Prolog terminology.

We will probably need to implement the serialization manually anyway instead of deriving it, so how it is represented internally doesn't necessarily have to match the JSON serialization.

@jjtolton

Would there ever be a type other than "compound" in internally tagged? 🤔 Because the hashmap by itself would signify compound.

How would you differentiate atom from string? How would you deal with arbitrary precision integers? Those are places where I think we need an object to represent it.

Sorry, I hadn't read the documentation on what "internally tagged" means. I understand now.

I personally think that if we do it right, internal and external tagging should appear identical from the consumer's perspective. There should be no implementation-specific terminology related to Scryer internals unless it is synonymous with a Prolog data concept. So from what I'm seeing, internal and external tagging both accomplish the same thing, if we do it right.

// 10
10
// Use:
// switch typeof(term) {
//   int => ...,
//   string => ...,
//   list => ...,
//   obj => switch term["type"] {
//     "functor" => ...,
//     "atom"    => ...,
//     ⋮
//   },
// }

I think this is a fine compromise. I would be very curious whether the overhead of serialization or the overhead of branching conditionals would be worse for performance.

One thing to keep in mind is that this branching conditional would have to be applied recursively over nested data structures (such as lists), so even if a list contained only integers, it would have to be processed as

# pseudocode

def process_term(term):
    if isinstance(term, int):
        ...
    elif isinstance(term, dict):
        ...
    elif isinstance(term, list):
        for t in term:
            process_term(t)

because the consumer would not necessarily know a priori that the list was composed entirely of integers or functors.

For dynamically typed languages, this would not be much of an issue. I'm not sure how annoying this would be for statically typed languages to recursively process mixed type collections.
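
For what it's worth, a rough sketch of that recursive dispatch in a statically typed consumer using serde_json; the tag and field names follow the mixed proposal above and are illustrative:

use serde_json::Value as Json;

// Recursively walk a term in the mixed representation:
// native JSON types where possible, tagged objects otherwise.
fn process_term(term: &Json) {
    match term {
        Json::Number(_) | Json::String(_) | Json::Bool(_) | Json::Null => {
            // leaf values: integer/float, string, true/false
        }
        Json::Array(items) => {
            for item in items {
                process_term(item);
            }
        }
        Json::Object(obj) => {
            if let Some(Json::Array(args)) = obj.get("args") {
                // compound term: obj["functor"] holds the name, recurse into args
                for arg in args {
                    process_term(arg);
                }
            } else if obj.contains_key("atom") || obj.contains_key("variable") {
                // atom or unbound variable
            }
        }
    }
}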

So I think two good rules should be:

  1. Whatever we produce as an output, we should be prepared to accept as an input (dogfooding)
  2. The output should express Prolog data concepts, not Scryer/rust/implementation specific concepts.

@Skgland (Contributor) commented Aug 26, 2024

For dynamically typed languages, this would not be much of an issue. I'm not sure how annoying this would be for statically typed languages to recursively process mixed type collections.

I would expect that they would need to handle mixed lists anyway, as neither Prolog nor JSON enforces that list contents be homogeneous.

So I think two good rules should be:

  1. Whatever we produce as an output, we should be prepared to accept as an input (dogfooding)

Opened #2505 with a work-in-progress Deserialize impl based on the current state of this PR.

  1. The output should express Prolog data concepts, not Scryer/rust/implementation specific concepts.

Given that this is for interop with other languages, simplifying common Prolog constructs into a form that is "pre-assembled" to match common language constructs should make them easier to work with compared to a canonical form. I would expect it to be easier to turn the "pre-assembled" form into a canonical form, if one needs it, than to parse the canonical form into an assembled form.

@jjtolton commented Aug 26, 2024

Given that this is for interop with other languages, simplifying common Prolog constructs into a form that is "pre-assembled" to match common language constructs should make them easier to work with compared to a canonical form. I would expect it to be easier to turn the "pre-assembled" form into a canonical form, if one needs it, than to parse the canonical form into an assembled form.

Agreed. I think a pre-assembled literal list is much better than

{"type: "functor", "functor": ".", "args: [{"type": "variable", "variable": "_A"}, {"type": "functor", "functor": ".", "args": [... 

and also more ideal than

{"type": "internal.foo.something", "functor": "f", "args": [...]}

@triska (Contributor) commented Aug 26, 2024

With "type" functor, do you mean a compound term?

@jjtolton

With "type" functor, do you mean a compound term?

I would venture that anything I say related to Prolog is closer to babbling than meaningful speech at this point. 😁 Hopefully the grownups can be in charge of proper terminology!

@hurufu (Contributor) commented Aug 27, 2024

In order to make sure that we avoid any pitfalls with JSON:

Numbers aren't well defined in the JSON specification, and basically every implementation that consumes JSON can interpret them as it wishes. It is often just a double precision float (without NaN and ±∞) so there is no universal way to distinguish between integer and float. This can bite if you forward the answer JSON to other runtimes.

Basically, JSON numbers are safe to use only when you don't care whether a value is an integer or a float.
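
A small illustration of that pitfall, with the lossy consumer simulated by re-parsing the serialized number as a double:

fn main() {
    let big: i64 = 9_007_199_254_740_993; // 2^53 + 1
    let json = serde_json::to_string(&big).unwrap();
    assert_eq!(json, "9007199254740993");

    // A consumer that maps every JSON number to a double gets a different value back.
    let as_double: f64 = json.parse().unwrap();
    assert_ne!(as_double as i64, big); // rounds to 9007199254740992
}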

@bakaq (Contributor, author) commented Aug 27, 2024

It is often just a double precision float (without NaN and ±∞) so there is no universal way to distinguish between integer and float.

Maybe this means that we should always represent an integer as a string.

@hurufu (Contributor) commented Aug 27, 2024

It is often just a double precision float (without NaN and ±∞) so there is no universal way to distinguish between integer and float.

Maybe this means that we should always represent an integer as a string.

Prolog also silently switches between integers and floats, which makes this distinction kind of artificial. I don't know what the best way out of this is. Maybe we can ignore that distinction, or use some type hints.

@jjtolton

It is often just a double precision float (without NaN and ±∞) so there is no universal way to distinguish between integer and float.

Maybe this means that we should always represent an integer as a string.

This is effectively the way Python handles large floats. Clojure will often handle floats as fractions unless specifically coerced, so while unconventional, we could return floats as integer ratios rather than floating point.

Ironically, the more we explore and refine the JSON API, the more I am convinced that @bakaq's original suggestion of a C-API makes more sense 🤣

On the other hand, the exploration of the serialization is reaping many benefits. I think the tagging taxonomy by itself will be a very valuable educational tool for folks curious about learning Prolog.

@jjtolton commented Aug 27, 2024

It is often just a double precision float (without NaN and ±∞) so there is no universal way to distinguish between integer and float.

Maybe this means that we should always represent an integer as a string.

On the other hand, if we represent anything other than a string as a string, we have to bag and tag it (wrap it in an object with a type attribute). Otherwise, how would we differentiate between 5 and "5"? Asking users to perform regex would not be very nice.

@bakaq (Contributor, author) commented Aug 27, 2024

On the other hand, if we represent anything other than a string as a string, we have to bag and tag it

Yeah, that would mean integers would always be tagged, even if they are small:

// 10
{ "type": "integer", "val": "10" }
// "10"
"10"

This has the benefit that the integer representation would be uniform, so there would be no need for the complicated "integer if small, string if big" fallback (this would also apply to rational components), but it also means that we always shift the burden of parsing the integer to the user, which is a bit unfortunate.

@Skgland (Contributor) commented Aug 27, 2024

On the other hand, if we represent anything other than a string as a string, we have to bag and tag it

Yeah, that would mean integers would always be tagged, even if they are small:

// 10
{ "type": "integer", "val": "10" }
// "10"
"10"

This has the benefit that the integer representation would be uniform, so there would be no need for the complicated "integer if small, string if big" fallback (this would also apply to rational components), but it also means that we always shift the burden of parsing the integer to the user, which is a bit unfortunate.

I would assume the common case to be small integers that fit into a language's normal integer types (for Rust, i8/u8 up to i128/u128), which would be easier if they were encoded as an int.
Also, in various languages without arbitrarily sized integers one would need to switch to a special BigInteger type anyway when working with huge integer values, and transforming a primitive type into a BigInteger is usually sufficiently easy.
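
For example, on the consumer side the string-to-big-integer step is usually a one-liner (Rust with the num-bigint crate, purely illustrative):

use num_bigint::BigInt;

fn main() {
    // Small integers arrive as native JSON numbers...
    let small = BigInt::from(42_i64);
    // ...and huge ones as decimal strings that parse directly into a big integer.
    let huge: BigInt = "10000000000000000000000000000000000000000000000000"
        .parse()
        .unwrap();
    println!("{small} {huge}");
}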

@jjtolton commented Aug 27, 2024

I suppose it's worth asking what the intended purpose is: performance, adoption, or something else?

From a performance perspective, JSON is probably the wrong approach anyway. An equivalent C-API would probably be superior. From an adoption standpoint, I feel like a consistent API would probably make the most sense, and objects have the added benefit of being future-extensible.

Also, is it worth making the JSON API congruent with the C API?

Now, my assumption is that in a C-API, when you call something like c_scryer_query_next_step, you would get back a pointer. Then you would probably call something like c_scryer_query_data_type, and it would give you back some kind of integer representing the type of the data structure, and then you would need to call something like c_scryer_query_value(code, data_ptr), or some specific API/ABI, to get the particular value.

This has the benefit that the integer representation would be uniform, so there would be no need for the complicated "integer if small, string if big" fallback (this would also apply to rational components), but it also means that we always shift the burden of parsing the integer to the user, which is a bit unfortunate.

I would assume the common case to be small integers that fit into a language's normal integer types (for Rust, i8/u8 up to i128/u128), which would be easier if they were encoded as an int. Also, in various languages without arbitrarily sized integers one would need to switch to a special BigInteger type anyway when working with huge integer values, and transforming a primitive type into a BigInteger is usually sufficiently easy.

If there is any type coercion required, we could always wrap anything that is a BigInt and leave a regular integer as a primitive type.

Anyway, I don't feel particularly strongly about it, except to say that I think for the serialization we should probably lean towards adoption over performance, and that the JSON API should probably be as similar as we can make it to the eventual C-API. I'm fine with a combination of primitive types and tagged types where required for disambiguation.

@Skgland (Contributor) commented Aug 27, 2024

c_scryer_query_next_step, you would get back a pointer. Then you would probably call something like c_scryer_query_data_type, and it would give you back some kind of integer representing the type of the data structure, and then you would need to call something like c_scryer_query_value(code, data_ptr), or some specific API/ABI, to get the particular value.

I would rather expect the return type to be a tagged union or pointer to a tagged union, so no need for an extra method to get the type.

i.e.

typedef struct Value Value;

struct Value {
  enum {
    Int,
    Float,
    String,
    Compound,
    /* ... */
  } tag;
  union {
    int64_t i;
    double d;
    char *s;
    struct {
      char *functor;
      size_t argc;
      Value *argv;
    } c;
    /* ... */
  } value;
};

or the Rust equivalent

#[repr(C, u8)]
enum Value {
  Int(i64),
  Float(f64),
  String(*const std::ffi::c_char),
  Compound { functor: *const std::ffi::c_char, argc: usize, argv: *const Value },
  // ...
}

@bakaq (Contributor, author) commented Aug 29, 2024

I would rather expect the return type to be a tagged union or pointer to a tagged union, so no need for an extra method to get the type.

There are some reasons to not do this actually. We may want to change the internal representation of Value and add variants like unums in the future. I really think that, in the C API, having a function that takes an opaque pointer to a Value and returns the type tag is better than having a predefined layout. We don't have pattern matching in C anyway, so not much is lost in terms of ergonomics.
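
On the Rust side that could look roughly like the following sketch; all names here are illustrative and nothing is defined by this PR:

// Only ever handed to C behind a pointer, so the layout of the inner
// value (and any future variants) never becomes part of the ABI.
pub struct ScryerTerm {
    // inner: Value,
}

#[repr(C)]
pub enum ScryerTermKind {
    Int = 0,
    Float = 1,
    Atom = 2,
    Compound = 3,
    // new kinds can be appended without breaking existing consumers
}

/// Returns the type tag for an opaque term handle.
#[no_mangle]
pub extern "C" fn scryer_term_kind(_term: *const ScryerTerm) -> ScryerTermKind {
    // dispatch on the internal representation here
    todo!()
}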

@jjtolton

There are some reasons to not do this actually.

I am open to it, but without some kind of type dispatch integer I would be unsure how to write client code to traverse or serialize the answer or leaf answer. There might be a technique I haven't seen before, though.

@Skgland (Contributor) commented Aug 29, 2024

Mh, I see. Part of that I think would already be solved by passing a pointer to the struct rather than the struct itself. But if we define a complete type in C, the C code could depend on the size and alignment of the struct and store it directly on the stack or inside other types. To prevent that we would need an incomplete type in C, but we can't define a struct with known tag and union fields that is also incomplete. I.e. we don't have a #[non_exhaustive] equivalent in C, especially because we would need to know the offsets from the start of the struct to two fields, not both of which can be at the start. So the offset of at least one of the fields depends on either the size or the alignment of the union type, which might change if we add more variants.

Yeah, looks like we need to go with the functions for this to work as a dynamic library.

@Skgland (Contributor) commented Aug 29, 2024

There are some reasons to not do this actually.

I am open to it, but without some kind of type dispatch integer I would be unsure how to write client code to traverse or serialize the answer or leaf answer. There might be a technique I haven't seen before, though.

I think the idea is something like

SCRYER_Machine *m = scryer_machine_new();
SCRYER_QueryState *q = scryer_query(m, "A = 'test'.");
SCRYER_QueryAnswer *a = scryer_next_answer(q);

switch (scryer_answer_kind(a)) {
  case SCRYER_ANSWER_TRUE:
  ...
  case SCRYER_ANSWER_FALSE:
  ...
  case SCRYER_ANSWER_MATCHES: {
    SCRYER_AnswerMatches *am = scryer_get_answer_matches(a);
    SCRYER_AnswerBindings *b = scryer_get_answer_matches_bindings(am);
    size_t bindings_count = scryer_get_answer_matches_bindings_count(am);
    for (size_t i = 0; i < bindings_count; i++) {
      SCRYER_Term *t = scryer_get_bound_term(b, i);
      switch (scryer_get_term_kind(t)) {
         case SCRYER_TERMKIND_INT:
         ...
         case SCRYER_TERMKIND_FLOAT:
         ...
         case SCRYER_TERMKIND_RATIONAL:
         ...
         case SCRYER_TERMKIND_STRING:
         ...
         case SCRYER_TERMKIND_ATOM: {
            char *name = scryer_get_atom_term_string(t);
            ...
         }
      }
    }

    ...
  }
}

@jjtolton

There are some reasons to not do this actually.

I am open to it, but without some kind of type dispatch integer I would be unsure how to write client code to traverse or serialize the answer or leaf answer. There might be a technique I haven't seen before, though.

I think the idea is something like [the C API sketch above]

Ok perfect. Yes I think that's very reasonable.

@bakaq bakaq marked this pull request as draft September 2, 2024 16:46
@bakaq bakaq mentioned this pull request Oct 13, 2024
@bakaq (Contributor, author) commented Dec 14, 2024

Continued in #2707.

@bakaq bakaq closed this Dec 14, 2024