
fix #156: tuples encoding at depth 0 #157

Open
wants to merge 1 commit into base: 0.3
Conversation

Speedy37
Contributor

No description provided.

@Speedy37
Contributor Author

Well, the binding tester failed, so this might break current code.

@Aioxas

Aioxas commented Dec 4, 2019

Too bad it would be too much work to maintain two branches of the same kind that diverge only because of a single commit.

@Speedy37
Contributor Author

Speedy37 commented Dec 4, 2019

I haven't yet had time to figure out what this commit breaks to cause the binding tester to fail.

@Aioxas

Aioxas commented Dec 6, 2019

So bindingtester, on one of the pushes, runs a TupleUnpack instr.code. Before this commit, TupleUnpack would push 6 items onto the stack; with this commit, it pushes a single Tuple(Vec<Element>(Len:6)) item onto the stack. The stack is not expecting such a low item count, and it fizzles out. I am not sure whether bindingtester itself is supposed to be changed, but here's what I changed in TupleUnpack; it passes bindingtester + cargo test.

Before:

TupleUnpack => {
    let data: Vec<u8> = self.pop_item();
    let mut data = data.as_slice();

    while !data.is_empty() {
        let (val, offset): (Element, _) = Decode::decode_from(data).unwrap();
        let bytes = val.to_vec();
        self.push_item(number, &bytes);
        data = &data[offset..];
    }
}

After:

TupleUnpack => {
    let data: Vec<u8> = self.pop_item();
    let mut data = data.as_slice();

    while !data.is_empty() {
        let (val, offset): (Element, _) = Decode::decode_from(data).unwrap();
        match val {
            Element::Tuple(v) => {
                v.iter().for_each(|va| self.push_item(number, &va.to_vec()))
            }
            _ => {
                let bytes = val.to_vec();
                self.push_item(number, &bytes);
            }
        }
        data = &data[offset..];
    }
}

Again, if this is wrong, I will continue to look into it.
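For concreteness, here is a toy sketch of the two encodings under discussion, using the FoundationDB tuple-spec typecodes for small positive ints (0x15) and nested tuples (0x05 ... 0x00). This is an illustration of the byte-level difference, not the crate's actual encoder:

```rust
// Typecode 0x15: positive integer that fits in one byte (FDB tuple spec).
fn encode_int(n: u8) -> Vec<u8> {
    vec![0x15, n]
}

// Flat depth-0 encoding: the tuple's elements are simply concatenated.
fn encode_flat(values: &[u8]) -> Vec<u8> {
    values.iter().flat_map(|&n| encode_int(n)).collect()
}

// Nested encoding: the same values wrapped as a single tuple element,
// delimited by 0x05 ... 0x00 per the spec.
fn encode_nested(values: &[u8]) -> Vec<u8> {
    let mut out = vec![0x05];
    out.extend(encode_flat(values));
    out.push(0x00);
    out
}

fn main() {
    // (1, 2) flat at depth 0 vs. ((1, 2),) nested.
    assert_eq!(encode_flat(&[1, 2]), vec![0x15, 1, 0x15, 2]);
    assert_eq!(encode_nested(&[1, 2]), vec![0x05, 0x15, 1, 0x15, 2, 0x00]);
    println!("ok");
}
```

Decoding the nested form as a single Element::Tuple is exactly why the stack sees one item instead of six.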

@Aioxas

Aioxas commented Dec 16, 2019

Any updates on this? Hope everything's going well, especially in this season.

@Speedy37
Contributor Author

Speedy37 commented Dec 17, 2019

The change you made is correct. I also can't figure out a way to write this commit without a breaking change.

@Speedy37
Contributor Author

Well, maybe decode_from should not try to decode the whole set of bytes for some types (i.e. Element).

Your code could be simplified to:

TupleUnpack => {
    let data: Vec<u8> = self.pop_item();
    let data = data.as_slice();

    let vec: Vec<Element> = Decode::decode(data).unwrap();
    for v in vec {
        self.push_item(number, &v.to_vec());
    }
}

@Aioxas

Aioxas commented Dec 17, 2019

That would work if we differentiate the behavior of decode and decode_from. If we don't, and we attempt to simplify the code anyway, errors will pop up.
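A toy sketch of the distinction being discussed (the signatures here are assumptions for illustration, not the crate's real Decode trait): decode_from reads one value and reports how many bytes it consumed, while decode is expected to consume the entire input:

```rust
// Toy decoder: one "value" is a single byte. Returns the value
// and the number of bytes consumed, like decode_from.
fn decode_from(data: &[u8]) -> (u8, usize) {
    (data[0], 1)
}

// decode repeatedly calls decode_from until the input is exhausted,
// so it must consume every byte.
fn decode(data: &[u8]) -> Vec<u8> {
    let mut out = Vec::new();
    let mut rest = data;
    while !rest.is_empty() {
        let (val, used) = decode_from(rest);
        out.push(val);
        rest = &rest[used..];
    }
    out
}

fn main() {
    // decode_from reads exactly one value and reports its size...
    assert_eq!(decode_from(&[7, 8, 9]), (7, 1));
    // ...while decode consumes the whole input.
    assert_eq!(decode(&[7, 8, 9]), vec![7, 8, 9]);
    println!("ok");
}
```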

@Speedy37
Contributor Author

Differentiating between decode and decode_from would minimize the impact, but it's still a breaking change, because data is now encoded according to the official spec (decoding accepts both the old format and the official spec).

Maybe a feature flag to enable this set of fixes is better.
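One shape such a flag could take in Cargo.toml (the feature name here is a placeholder, not an existing crate feature):

```toml
[features]
default = []
# Opt in to the spec-compliant depth-0 tuple encoding.
# Decoding would keep accepting both the old and the official format either way.
spec-tuples = []
```

The encoder would then branch on #[cfg(feature = "spec-tuples")], so 0.3 users keep the old on-disk format unless they explicitly opt in.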

To be honest, I just want to provide people a 0.3 version that works, but I don't want to spend a lot of time on it. The 0.4 branch is what matters most to me (async, safety, no more clone() everywhere).
You seem to be a 0.3 user, so your opinion matters more than mine on this.

@Aioxas

Aioxas commented Dec 18, 2019

A flag would work for sure.

To be honest, the only reason I hang onto this branch is that some packages I use as dependencies haven't released their async/await branches yet. Once they do, I'll rewrite to comply with futures 0.3, async/await, and all that jazz. But for now, what I aim for is to have data encoded and decoded the same way in both branches, so that when I rewrite for the other one I don't hit breakage due to how 0.3 encoded or decoded it.

@Aioxas

Aioxas commented Jan 15, 2020

So, a hypothetical case: being on 0.3 and moving to async/await, what will that mean for data previously encoded in 0.3 and then decoded in async/await? Will it be 100% compatible, or do I have to do something to make it compatible, so that when I move over I won't lose data due to the decoding algorithm changing?

@Speedy37
Contributor Author

The 0.4 parser will decode what was (a, b) as ((a, b),).
In the 0.3 branch, depending on how you manipulate tuples, they might be nested at level 0 or not. For example, 0.3's subspace encodes a non-nested tuple in the database.

It might be possible to build a compatibility layer that falls back to decoding the nested tuple as if it were non-nested.
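A minimal sketch of that fallback, using a toy Element type instead of the crate's real decoder: if a 0.4-style decode yields exactly one nested tuple at depth 0, treat its contents as the flat 0.3 tuple; anything else passes through unchanged:

```rust
#[derive(Debug, PartialEq)]
enum Element {
    Tuple(Vec<Element>),
    Int(i64),
}

/// Unwrap ((a, b),) into (a, b), so values written flat by 0.3 code
/// read back the same under 0.4's nested depth-0 semantics.
fn compat_flatten(decoded: Vec<Element>) -> Vec<Element> {
    match decoded.len() {
        1 => match decoded.into_iter().next().unwrap() {
            // A single nested tuple at depth 0: fall back to its contents.
            Element::Tuple(inner) => inner,
            // A single non-tuple element stays a one-element tuple.
            other => vec![other],
        },
        _ => decoded,
    }
}

fn main() {
    // 0.4 decodes what 0.3 wrote as (1, 2) into ((1, 2),): unwrap it.
    let nested = vec![Element::Tuple(vec![Element::Int(1), Element::Int(2)])];
    assert_eq!(compat_flatten(nested), vec![Element::Int(1), Element::Int(2)]);
    // Already-flat data passes through unchanged.
    assert_eq!(compat_flatten(vec![Element::Int(5)]), vec![Element::Int(5)]);
    println!("ok");
}
```

Note the ambiguity this can't resolve: a genuine one-element tuple whose only element is itself a tuple would also be flattened, which is why a real layer would probably need to be opt-in.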

@Aioxas

Aioxas commented Jan 16, 2020

I'll open a new issue for that then and leave this one on hold until a consensus is reached. Thanks!
