
Protobuf Transaction Signing #6078

Closed

aaronc opened this issue Apr 26, 2020 · 83 comments

aaronc commented Apr 26, 2020

Problem Definition

The Cosmos SDK has historically used Amino JSON for signing of transactions whereas Amino binary is used for encoding.
During the SDK's migration to protobuf, we had made the preliminary decision to use a canonical protobuf JSON encoding for signing as described in https://github.com/regen-network/canonical-proto3.

As a consequence of #6030, the Cosmos SDK is moving in the direction of using protobuf's Any type for the transaction encoding and signing format. In this discussion, a number of participants have asked that we revisit the transaction signing discussion. The options that have been discussed/are available for consideration are outlined below.

It should be noted that it is theoretically possible to support more than one of the following options via an enum flag on the signature. Whether that should or should not be done is a related question.

Proposals

Feel free to suggest updates to these alternatives in the comments

(1) Protobuf canonical JSON

The official proto3 spec defines a canonical mapping to JSON. This is not really deterministic, however, so we define a canonical encoding on top of that using https://gibson042.github.io/canonicaljson-spec/.

Pros:

  • By including field names, JSON is more self-describing than proto and is somewhat human-readable
  • Can prevent certain user errors made when manually copying proto definitions or if proto definitions are changed between versions (shouldn't be done)
  • Relatively easy migration for existing signing tools like the Ledger app
  • Causes transaction verifiers to reject unknown fields which is generally correct

Cons:

  • Converting to another format for signing may introduce transaction malleability vulnerabilities - i.e. when different encoding representations map to a single signing representation. This is known to exist, at least for Timestamp, although that malleability is likely unexploitable.
  • Requires clients to have proto JSON and canonical JSON in addition to protobuf. There are very few implementations of canonical JSON (only Go as far as we know) and some protobuf implementations don't implement proto JSON correctly (none of the Rust ones do, for example)
  • Doesn't bech32 encode addresses, pubkeys, etc. like Amino JSON
  • By default, encodes fields in lowerCamelCase, i.e. some_field becomes someField
  • Special treatment for well known types (e.g. Timestamp, Any, Duration, Struct) not necessarily supported well in existing libraries.
  • Any kind of renaming of field names leads to breaking changes of the message

(2) Protobuf canonical binary

This involves re-encoding the protobuf used for transaction encoding canonically for signing - meaning that fields must be ordered and defaults omitted. This is how Weave does signing.
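For reference, a small hedged sketch of what "canonical within one library" means in Go's protobuf implementation; as the cons below note, the Deterministic option stabilizes output for this library/version (e.g. map ordering) but is not a cross-implementation canonical form, so signing may still need an extra canonicalization layer:

```go
package main

import (
	"fmt"

	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/types/known/wrapperspb"
)

func main() {
	msg := wrapperspb.String("hello")
	// Deterministic marshaling: stable bytes for this library, but other
	// protobuf implementations are free to emit a different (still valid)
	// encoding of the same message.
	signBytes, err := proto.MarshalOptions{Deterministic: true}.Marshal(msg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%x\n", signBytes)
}
```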

Pros:

  • Simpler for clients to implement, most protobuf implementations serialize things canonically
  • Canonicalization could prevent a certain small class of user errors (if the wrong protobuf definition was used to encode a message)
  • Renaming fields is a non-breaking change (as long as semantics don't change)
  • Causes transaction verifiers to reject unknown fields which is generally correct

Cons:

  • Introduces a subtle transaction malleability vulnerability if modules attempt to use proto2 semantics - i.e. interpret null/omitted differently from default/zero
  • Not all protobuf implementations serialize things canonically and may require an additional canonicalization layer
  • Doesn't prevent user errors from reordering fields when copying or modifying proto files (like JSON does), but this should be much less of an issue using Any (because the full type URL is included and oneof's are not copied manually) and with breaking change checkers like Buf/Prototool
  • Hard to implement in the Ledger app for many/arbitrary message types

(3) Protobuf binary as encoded in transaction

This simply uses the protobuf encoding as broadcast in the transaction. This becomes a little easier for both signature creation and verification because of Any (although it could be done without Any too). Because Any wraps the raw bytes of the sdk.Msg, it is pretty easy to use these same exact bytes for signing and verification and only require that SignDoc itself is encoded canonically, rather than every Msg.
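To make this concrete, here is a rough wire-level sketch in Go. The field numbers and SignDoc layout are illustrative (not ADR 020's actual definition); the point is that the Any bytes from the broadcast tx are embedded verbatim and only this small envelope has to be produced identically by signer and verifier:

```go
package main

import (
	"fmt"

	"google.golang.org/protobuf/encoding/protowire"
)

// buildSignBytes lays out a hypothetical envelope:
//   repeated bytes msgs = 1; string chain_id = 2;
//   uint64 account_number = 3; uint64 sequence = 4
// Each Any is appended exactly as it appears in the broadcast tx.
func buildSignBytes(rawAnyMsgs [][]byte, chainID string, accountNumber, sequence uint64) []byte {
	var b []byte
	for _, raw := range rawAnyMsgs {
		b = protowire.AppendTag(b, 1, protowire.BytesType)
		b = protowire.AppendBytes(b, raw) // no re-encoding of the inner Msg
	}
	b = protowire.AppendTag(b, 2, protowire.BytesType)
	b = protowire.AppendString(b, chainID)
	b = protowire.AppendTag(b, 3, protowire.VarintType)
	b = protowire.AppendVarint(b, accountNumber)
	b = protowire.AppendTag(b, 4, protowire.VarintType)
	b = protowire.AppendVarint(b, sequence)
	return b
}

func main() {
	fmt.Printf("%x\n", buildSignBytes([][]byte{{0x0a, 0x03, 'f', 'o', 'o'}}, "testchain-1", 7, 1))
}
```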

Pros:

  • Simpler for clients to implement than even (2). All implementations should pretty much just work
  • Is not vulnerable to proto2 semantics transaction malleability as (2) is

Cons:

  • Same as (2) in not preventing user errors related to manually copying or tampering with proto definitions, although less of an issue with Any
  • Does not cause transaction verifiers to reject unknown fields (unlike 1, 2 & 4) which may lead to unexpected behavior for clients

(4) Amino JSON

This is how txs are signed currently. The reason this is under consideration is that breaking Amino JSON signing would break many clients, especially the Ledger app. Transactions could still be encoded with protobuf, and the /tx/encode endpoint could accept Amino JSON and return protobuf rather than Amino binary for tx broadcasting - some upfront work would be required to enable this, but it is possible.

Pros:

  • Doesn't break existing clients right away
  • Doesn't introduce new transaction malleability issues that weren't already there
  • Bech32 encodes addresses, pubkeys, etc. for better readability than (1)
  • Causes transaction verifiers to reject unknown fields which is generally correct

Cons:

  • Delays the deployment of protobuf signing, although Amino JSON signing could be enabled via a flag on Signature so maybe this is a non-issue
  • Requires an additional layer on top of protobuf with information that isn't conveyed by the protobuf schema and so far lacks extensive library support

(5) Custom Proto JSON

Extend (1) to support custom encoding of certain types like bech32 addresses

Pros:

  • The same benefits as Amino JSON for signing with embedded devices but derived from .proto files, i.e. human readable bech32 addresses and keys
  • An easier migration path than (1)
  • Does not try to re-use an encoding that was never designed to be deterministic

Cons:

  • Requires a possibly even more complex custom JSON layer on top of protobuf plus extensions to protobuf fields to indicate custom formatting like bech32
  • Every signature verifier (on and off-chain) needs to maintain support for all options (forever)

Related Question: Should we allow multiple signing algorithms?

Theoretically we can allow clients to use multiple signing algorithms and indicate which one they used as an enum flag on the Signature struct.
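For illustration only (these are hypothetical names, not the SDK's actual types), the flag could look like:

```go
package tx

// SignMode tells the verifier which sign bytes to reconstruct before
// checking the signature.
type SignMode int32

const (
	SignModeUnspecified SignMode = iota
	SignModeProtoBinary // options (2)/(3)
	SignModeAminoJSON   // option (4)
)

type Signature struct {
	PubKey    []byte
	Signature []byte
	Mode      SignMode
}
```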

Pros:

  • allows clients to sign with the method they are able to use and feel most secure with
  • allows Amino JSON compatibility without delaying protobuf signing support

Cons:

  • increases the number of vulnerability paths that need auditing
  • increases maintenance requirements for supporting multiple paths
  • leaves too much responsibility to clients rather than making an opinionated, well-informed decision


aaronc commented Apr 26, 2020

I do want to say that after considering the above, my personal preference is to go with approach (3) (signing the raw binary) as default and to allow (4) Amino JSON for a limited period of time while clients transition.

(3) seems to offer the simplest UX for clients as well as the fewest malleability issues. Given that Any should minimize user errors from manually manipulating .proto files, I don't see that as being as big of an issue as with Amino or the proto oneof approach.

Supporting (4) temporarily through a flag doesn't seem to introduce any new issues that aren't already here, and has the big benefit of not disrupting wallets, exchanges, etc. that don't have time to transition overnight.


zmanian commented Apr 26, 2020

Thank you for writing this up @aaronc !

Here are the problems I see with 3.
Protobuf deserializing in embedded/wasm environments is extremely difficult. This is why we are actually avoiding protobuf in Armistice.

This basically means that you can't decode the transactions inside your Trusted Computing Base and you need to extend your trust boundaries around signing to include another system that will deserialize your protobufs for you. This is going to be a disaster for practical security.

Strongly prefer 4 and 1, because Amino JSON serialization in good-enough form is pretty easy to implement and widely available. Proto3 canonical JSON actually looks okay, but I'm not really sure how widespread support for it is. I think prost doesn't support this?


aaronc commented Apr 26, 2020

@zmanian why is protobuf deserialization in embedded environments so hard? Have you looked at stuff like https://github.com/nanopb/nanopb? Even if that didn't work, I don't think a hand decoder should be too hard. The JSON stuff in the ledger app appears to be mostly hand coded.

I'm fine with enabling Amino JSON for compatibility, but we should have an alternative going forward that just requires .proto files.


tarcieri commented Apr 26, 2020

Regarding this:

Theoretically we can allow clients to use multiple signing algorithms and indicate which one they used as an enum flag on the Signature struct.

...it's reminiscent of the alg field in JOSE JWT/JWS, which has been implicated in numerous security vulnerabilities (e.g. this recent one), namely via an ongoing history of implementation bugs which leverage an attacker-controlled alg into tricking a verifier into using the wrong algorithm.

An alternative is to make alg a property of public keys instead of signatures, ala X.509 Subject Public Key Info (SPKI).

This gives you a strong binding between an algorithm and a public key, known a priori to the verifier, and with that, a signature can name the "SPKI" (or SPKI hash).
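A minimal sketch of that idea (illustrative types, with only ed25519 wired up): the verifier dispatches on the algorithm registered with the key, so nothing attacker-controlled in the transaction selects the verification path.

```go
package keys

import "crypto/ed25519"

// Algorithm is fixed when the public key is registered, as in X.509 SPKI.
type Algorithm uint8

const (
	AlgEd25519 Algorithm = iota + 1
	AlgSecp256k1
)

type PublicKeyInfo struct {
	Alg Algorithm
	Key []byte
}

func verify(pk PublicKeyInfo, msg, sig []byte) bool {
	switch pk.Alg { // dispatch on the key's algorithm, never on tx data
	case AlgEd25519:
		return len(pk.Key) == ed25519.PublicKeySize &&
			ed25519.Verify(ed25519.PublicKey(pk.Key), msg, sig)
	case AlgSecp256k1:
		return false // secp256k1 verification elided in this sketch
	}
	return false
}
```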


zmanian commented Apr 26, 2020

Some discussion here
https://www.reddit.com/r/rust/comments/aequik/is_there_a_no_std_compatible_protobuf_library_out/

My guess would be that nanopb would be too big for the ledger.

Adding a C deserialization library seems very much against our goals of minimizing C in the TCB.


zmanian commented Apr 26, 2020

Also pretty strongly support the idea of binding the serialization system to the public key to support forward migration!


tarcieri commented Apr 26, 2020

@aaronc

why is protobuf deserialization in embedded environments so hard?

The Protobuf Schema language lacks descriptions for the (maximum) size of variable-length fields.

This means it isn't possible to codegen a struct with a fixed-sized APDU-like structure which is typically used in heapless (e.g. microcontroller) environments.

Libraries which do support Protobufs in embedded environments therefore tend to work a level of abstraction below a typical Protobuf library which generates message types directly from the schema. This is also bad from a code size perspective, because it punts all of the work of size-checking the underlying fields to the end user.


aaronc commented Apr 26, 2020

...it's reminiscent of the alg field in JOSE JWT/JWS, which has been implicated in numerous security vulnerabilities (e.g. this recent one), namely via an ongoing history of implementation bugs which leverage an attacker-controlled alg into tricking a verifier into using the wrong algorithm.

Okay, so that's definitely a consideration for supporting multiple algorithms. I do want to note that we're talking about supporting maybe 2-3 algorithms, and "none" wouldn't be one of them. JWT seems to support ~20.

Also pretty strongly support the idea of binding the serialization system to the public key to support forward migration!

Would there be an option to change the binding to a newer system?

https://www.reddit.com/r/rust/comments/aequik/is_there_a_no_std_compatible_protobuf_library_out/

In that thread it seems there are at least 2 users who hacked together a way to do it. So definitely possible, just not standardized in a library yet.

The Protobuf Schema language lacks descriptions for the (maximum) size of variable-length fields.

This means it isn't possible to codegen a struct with a fixed-sized APDU-like structure which is typically used in heapless (e.g. microcontroller) environments.

How is this any different from JSON? Strings/byte arrays have no max length in JSON either. If you had fixed sized arrays, you would need to truncate strings with either json or pb. But I'm not even sure why you'd need to do that sort of copying into a struct. From my quick glance at the ledger app source code, it seems like one of the key things it's trying to do is display info in fields to users. You could do the same thing in protobuf with zero allocation. Just iteratively navigate through the message like you would with JSON and keep track of what level of nesting you're at (which should be possible with a fixed depth). Strings should be easier to extract from protobuf because you don't need to copy to remove escape chars. And addresses need a small array to convert from bytes to bech32.
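A rough sketch of that kind of iteration, using Go's low-level protowire package as a stand-in for whatever hand-rolled reader an embedded app would use; only a field number, wire type and a sub-slice of the input are in scope at any time, and a real app would recurse into the nested messages it knows about:

```go
package main

import (
	"fmt"

	"google.golang.org/protobuf/encoding/protowire"
)

// walkFields visits each top-level field of an encoded message without
// decoding into a struct; the caller decides how to display each value.
func walkFields(b []byte, visit func(num protowire.Number, typ protowire.Type, value []byte)) error {
	for len(b) > 0 {
		num, typ, n := protowire.ConsumeTag(b)
		if n < 0 {
			return protowire.ParseError(n)
		}
		b = b[n:]
		m := protowire.ConsumeFieldValue(num, typ, b)
		if m < 0 {
			return protowire.ParseError(m)
		}
		visit(num, typ, b[:m])
		b = b[m:]
	}
	return nil
}

func main() {
	msg := protowire.AppendString(protowire.AppendTag(nil, 1, protowire.BytesType), "hello")
	_ = walkFields(msg, func(num protowire.Number, typ protowire.Type, v []byte) {
		fmt.Println(num, typ, v)
	})
}
```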

Anyway, I definitely can understand the concern of not wanting to rewrite a bunch of firmware, thus my support for keeping an Amino JSON option. But these problems with protobuf to me seem solvable...


tarcieri commented Apr 27, 2020

How is this any different from JSON?

JSON and Protobufs are both problematic in this regard. There are some optimizations you can do for JSON as a proper context-free grammar, but ultimately on platforms with low memory the main one is ensuring either JSON or Protobufs have a well-known fixed-sized structure.

Just iteratively navigate through the message like you would with JSON and keep track of what level of nesting you're at

I've been working on this sort of pushdown automaton in Veriform, as it were.

For some extreme low-end embedded environments, even that sort of thing is too much (our target is a 500+MHz Cortex-A environment, whereas the problematic environments are much lower clocked Cortex-M ones with much smaller stacks)

@ethanfrey
Contributor

I want to jump in with an argument against Amino JSON. Basically, if we keep using that, we will need to support that tooling on all platforms. Much less work than binary Amino, but over 2 years there has been amazingly little work by the core team to port any of Amino, and assuming all this magically happens "from the community" is wishful thinking. If we need to stay with Amino JSON, then I would say that porting client side libraries in major languages (not just JS and Rust, but say Java/Kotlin and ObjC/Swift at the minimum, ideally Python and some more) comes into scope as part of this migration. If we can use tooling that already works out of the box in all these languages, then that is much easier.

The strong valid criticism I see above is that it is hard to parse Protobuf in an HSM.

My guess would be that nanopb would be too big for the ledger.
Adding a C deserialization library seems very much against our goals of minimizing C in the TCB.

I encourage you to look at the Ledger app that Juan (same dev who wrote Cosmos Ledger app) wrote for IOV: https://github.com/iov-one/ledger-iov There isn't too much code there for the parsing. Actually grabbing a few fields from a predefined protobuf format is quite easy, and doesn't require parsing arbitrary structs into memory.

It parses the Protobuf signing format we use for transactions and displays it to the users. It didn't take him that long to do it (I believe less than the original Cosmos app). I just want to say that using protobuf is not some new and untested idea, but was used for over 2 years now and on a mainnet. That IOV has no clue about business and marketing doesn't mean the code there has no technical merit. I advise you to borrow liberally.


aaronc commented Apr 27, 2020

JSON and Protobufs are both problematic in this regard. There are some optimizations you can do for JSON as a proper context-free grammar, but ultimately on platforms with low memory the main one is ensuring either JSON or Protobufs have a well-known fixed-sized structure.

Okay, well sounds like this problem would only be solved by something like ASN.1 which isn't really an option. But either way sounds like we can actually deal with the memory problem with an event driven parser as opposed to decoding into a struct. So then the issue becomes code size, right?

Here I can see one potential argument in favor of JSON. JSON is self-describing so if you don't have the schema, you could still iteratively display each raw field on the JSON on a device like the ledger. To do that in protobuf, you would need to include the schema for every type or limit support to just a few types to limit code size. Is that one of the tradeoffs you're seeing @tarcieri ?

One point I will grant to Amino JSON is that it bech32 encodes addresses, pub keys, etc. Say you wanted to iteratively display every element of a JSON object in the ledger without the schema, with protobuf you would get base64 whenever you don't know the schema. Maybe that's not a hard problem to solve, but I do see it as a valid good thing about Amino JSON.

To do this with protobuf we would need a custom JSON serialization format which maybe indicated the bech32 type of bytes fields with an extension (i.e. bytes key = 1 [(cosmos_proto.bech32_type) = "valpub"]). That's maybe not a bad idea to include in the .proto files anyway, but as @ethanfrey noted the more custom work clients need to do the bigger the burden.
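For illustration, the display side of such a bech32-tagged bytes field could be as small as the following; the bech32 package is just one common Go implementation, and "valpub" is only the example prefix from above:

```go
package main

import (
	"fmt"

	"github.com/btcsuite/btcutil/bech32"
)

// renderBech32 shows raw key/address bytes as bech32 instead of base64.
func renderBech32(hrp string, raw []byte) (string, error) {
	grouped, err := bech32.ConvertBits(raw, 8, 5, true) // regroup 8-bit bytes into 5-bit groups
	if err != nil {
		return "", err
	}
	return bech32.Encode(hrp, grouped)
}

func main() {
	s, _ := renderBech32("valpub", []byte{0x01, 0x02, 0x03, 0x04})
	fmt.Println(s)
}
```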

Anyway, I will include this as a new option (5) and update the pros and cons of the other options to reflect this discussion.

I do still think there is an elegance to approach (3) and if it did work for embedded devices, maybe following the example in https://github.com/iov-one/ledger-iov, that might make everyone's lives easier.


zmanian commented Apr 27, 2020

This self-describing nature of JSON allows the Cosmos app to work with any number of chains out of the box.

@ethanfrey
Contributor

This self-describing nature of JSON allows the Cosmos app to work with any number of chains out of the box.

That and the fact that they all use a superset of the schema the ledger app understands.

It checked for e.g. .msgs[0].type == "cosmos-sdk/send" and then .msgs[0].data.amount[0].amount. If we encoded it in JSON but with different keys, this wouldn't work.

Note that with option (3), protobuf is also self-describing to a degree. At least you can check the type they claim and will not mix up cosmos-sdk/send and cosmos-sdk/burn. It will not know how to display any type that it was not compiled for. But then again the JSON parser wouldn't either - it can just display the raw JSON to the user. Is that mode actually used (display raw JSON of sign bytes to the end user)? If this is a common use-case (pass raw bytes to the end user to interpret), then there is a big bonus for JSON. Otherwise, I don't see how self-describing helps an app that checks hardcoded fields - it just helps avoid mixups (like the Any type field does).

@iramiller
Contributor

If this is a common use-case (pass raw bytes to the end user to interpret), then there is a big bonus for JSON.

I want to caution against weighting this case too highly in the context of a generic user interface. A user interface that is made up of simply printing out the JSON key/value pairs without understanding the underlying message format will yield a strictly poor user experience for most "end" users. I feel like there is an important distinction between the audience of developers using the system and the users the developers are intending to support. The developers should be familiar with Proto and the tooling required to deal with it ... so the JSON step is strictly speaking an extra effort that may or may not support the needs of the users the developers are working for.


tarcieri commented Apr 27, 2020

@iramiller

I want to caution against weighting this case too highly in the context of a generic user interface. A user interface that is made up of simply printing out the json key/value pairs without understanding the underlying message format will yield a strictly poor user experience for most "end" users.

To further emphasize this, I believe Ledger has required moving from displaying raw JSON to a UI which extracts, displays, and confirms values in the message for exactly these reasons.

@aaronc

Okay, well sounds like this problem would only be solved by something like ASN.1 which isn't really an option.

For what it's worth, ASN.1 DER solves this problem poorly as well. Most embedded implementations of DER are actually BER parsers that don't even verify the BER is canonical (and therefore DER).

The most embedded friendly formats follow an "APDU"-like structure (i.e. fixed-sized fields everywhere). You can get similar properties out of either Protobufs or JSON if you ensure all of the fields in either a message are constant-length (by using e.g. fixed integer types and fixed-length bytes fields with Protobufs).


aaronc commented Apr 27, 2020

The most embedded friendly formats follow an "APDU"-like structure (i.e. fixed-sized fields everywhere). You can get similar properties out of either Protobufs or JSON if you ensure all of the fields in either a message are constant-length (by using e.g. fixed integer types and fixed-length bytes fields with Protobufs).

@tarcieri Would you consider Cap'n Proto embedded friendly? (Not that it's an option anytime soon...)

@tarcieri

Not particularly. Cap'n Proto is significantly more complicated than Protobufs (see for example message segments and inter-segment pointers)


webmaster128 commented Apr 27, 2020

Great stuff here, everyone. A few additional 👍 / 👎 for the main list that you are free to merge somehow:


(1) Protobuf canonical JSON

Pros:

  • Relatively easy migration for existing signing tools like the Ledger app

Cons:

  • Special treatment for well known types (e.g. Timestamp, Any, Duration, Struct) not necessarily supported well in existing libraries.
  • Any kind of renaming of field names leads to breaking changes of the message

(2)

Pros:

  • Renaming fields is a non-breaking change (as long as semantics don't change)

Cons:

  • Hard to implement in the Ledger app for many/arbitrary message types

(3)

Cons:

  • Impossible due to circular dependency: Protobuf binary as encoded in transaction contains signatures but we don't have the signature(s) yet.

(4)

Cons:

(5)

Pros:

  • Does not try to re-use an encoding that was never designed to be deterministic

(multiple signing algorithms?)

Cons:

  • Every signature verifier (on and off-chain) needs to maintain support for all options (forever)

@zmanian if there was a JSON document that is signed, would you expect a JSON->proto conversion to be possible, i.e. you only operate on JSON and assume this can be translated back to the transaction format understood by Tendermint (proto binary). Or would it be sufficient to create a JSON document from a proto document (one way function proto->JSON), which is then sent to chain?

Amino allows two-way mappings, but this is significantly harder to get right than one way mappings.


While IOV's Ledger app is great for IOV's use cases, I think it is important to note that as of now it only supports a single message type (with ongoing work to add a handful more). Chain ID and address prefix are compile time constants with just a boolean testnet/mainnet flag.


aaronc commented Apr 27, 2020

(3)

Cons:

  • Impossible due to circular dependency: Protobuf binary as encoded in transaction contains signatures but we don't have the signature(s) yet.

Not true. Maybe re-read how I framed it and look at how Any is encoded. I never suggested signing the transaction, just the SignDoc which can contain the exact same pre-encoded Any msg's as the transaction.

@webmaster128
Member

Sorry @aaronc, you're completely right. What would probably help (at least for me) is to repeat at the top what we are talking about: the encoding of the SignDoc structure from ADR 20 from master plus the oneof-to-Any change similar to #6081.


zmanian commented Apr 28, 2020


To further emphasize this, I believe Ledger has required moving from displaying raw JSON to a UI which extracts, displays, and confirms values in the message for exactly these reasons.

My take here is that having schema aware signers should be an enhancement of the baseline signing experience.

Prior art in Ethereum, Bitcoin etc is that if your signer isn't aware of the schema you are signing then you are signing pretty much opaque bytes. By using json as the signing target, you get an enhanced experience if the signer is aware of the schema and fall back to something somewhat human readable.

I can imagine a format that is easier to implement in Rust and other languages than Amino JSON.
Like this is sort of awkward
https://github.com/iqlusioninc/deep_space/blob/develop/src/canonical_json.rs#L9-L34

But also approx. 10 LOC to implement.

I'm generally in favor of 1 or 4.

@zmanian if there was a JSON document that is signed, would you expect a JSON->proto conversion to be possible, i.e. you only operate on JSON and assume this can be translated back to the transaction format understood by Tendermint (proto binary). Or would it be sufficient to create a JSON document from a proto document (one way function proto->JSON), which is then sent to chain?

Amino allows two-way mappings, but this is significantly harder to get right than one way mappings.

It's totally fine if you need a protobuf schema to turn the json into bytes on the wire.

The general pattern is that signers are resource constrained and difficult to update; settings and serializers are much more flexible.


webmaster128 commented Apr 28, 2020

It's totally fine if you need a protobuf schema to turn the json into bytes on the wire.

@zmanian okay, that's level 1 independence. Level 2: what if it was not possible at all to map back JSON->proto? Instead you must use the proto doc from the composing environment plus the signature:

    p2p network            Composing environment      Signing environment
------------------        -----------------------    ---------------------
                          
                            unsigned tx proto  --------->  SignDoc (JSON)
                                     |                          |
                                     |                          | sign
                                     |                          |
                                     v                          v
 signed tx bytes  <----------  signed tx proto  <-----------  signature
                    serialize 


aaronc commented Apr 28, 2020

I'm generally in favor of 1 or 4.

So I'd like to take 4 (Amino JSON) off the table as a long term solution. Short-term, sure. But if we want something like Amino JSON, let's consider 1 or 5 where all of the information is in the .proto files.


aaronc commented Apr 28, 2020

@webmaster128 added your pros/cons to the main list (except the Amino JSON con which was already there worded differently)


zmanian commented Apr 28, 2020

I love this diagram

The composing environment is expected to know how to take an unsigned proto tx and a signature and turn it into signed bytes.

The verifier is expected to know how to take signed bytes, generate a SignDoc and verify a signature.

  chain/verifier          p2p network             Composing environment      Signing environment
 -----------------     ------------------        -----------------------    ---------------------

                                                      unsigned tx proto  --------->  SignDoc (JSON)
      verifier                                                 |                          |
         ^                                                     |                          | sign
         |                                                     |                          |
         |                                                     v                          v
   SignDoc (JSON) <------  signed tx bytes  <----------   signed tx proto  <-----------  signature
                                            serialize


webmaster128 commented Apr 28, 2020

I love this diagram

Glad to hear that.

Where I am heading to is: proto->JSON serialization is going to be non-trivial, but I am sure it can be done as described in (1), (4) or (5). The reverse operation (deserializing JSON->proto) however is specified very openly, allowing all kinds of JSON variants that lead to the same proto document. This starts with allowing both numbers and strings to deserialize to int32, fixed32, uint32, int64, fixed64, uint64 ("Either numbers or strings are accepted.") and just gets more complicated when allowing different RFC3339 timezones as an input for proto's Timestamp ("Offsets other than "Z" are also accepted."; how to handle perfectly valid RFC3339 leap seconds?). I'm not saying multiple JSON representations that decode to the same SignDoc are necessarily insecure. But if this mapping needs to be supported, we'll have to do much more work specifying all the edge cases. (I don't buy any "getting feature X from library Y for free".)
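A quick way to see this non-injective mapping with Go's protojson: two different JSON texts decode to the same message, which is exactly the kind of ambiguity a JSON-based SignDoc spec would have to pin down.

```go
package main

import (
	"fmt"

	"google.golang.org/protobuf/encoding/protojson"
	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/types/known/wrapperspb"
)

func main() {
	a, b := &wrapperspb.Int64Value{}, &wrapperspb.Int64Value{}
	// Per the proto3 JSON mapping, 64-bit integers accept both JSON numbers
	// and JSON strings.
	_ = protojson.Unmarshal([]byte(`123`), a)
	_ = protojson.Unmarshal([]byte(`"123"`), b)
	fmt.Println(proto.Equal(a, b)) // true: two encodings, one message
}
```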

When the JSON representation is only used for signing, we lose the current flow of broadcasting signed JSON to the REST server, which is a good thing in my opinion. I believe a client (tx composing and broadcasting environment; no privkey here) should be able to operate on proto, given a Cosmos specific wrapper around a general purpose proto lib. But I want to make sure there is consensus on this.


aaronc commented Apr 28, 2020

I do want to re-iterate that there is something pretty elegant about (3) - just signing the raw binary.

All of the JSON solutions including the standards-based approach (1) require both a) a fair amount of additional client library support and b) substantial auditing to check for edge cases and malleability issues.

It seems that the biggest benefit of the JSON solutions is that we could just show raw JSON to users of the ledger if the ledger app doesn't have the full proto schema. But this convenience does come at a cost elsewhere.

With approach (3), you have both the least surface area for transaction malleability issues and the easiest implementation for composing and verification environments.

For hardware signing environments, there is going to be complexity whichever approach is used. Is the benefit of being able to show raw JSON as a fallback worth all the additional complexity elsewhere?


zmanian commented Apr 28, 2020

yes the whole strategy of signing the raw binary produces a system that is far too cumbersome to extend.

an isolated signing environment brings little benefit if you don't have access to secure display to confirm what you are signing.

The weak link in blockchain protocols are the humans that interact with them.

I can't emphasize enough that I am overwhelmingly opposed to signing non-self-describing data formats.


tarcieri commented May 6, 2020

I think for unknown fields we either need to make the choice that a) critical extensions are never added as regular fields or b) unknown fields always trigger failure.

I'd suggest taking a look at Adam Langley's blog post about extensibility. I think it winds up being pretty important:

https://www.imperialviolet.org/2016/05/16/agility.html

Actually @tarcieri, one way we could flag critical fields in protobuf is just with field number ranges. Say 1-127 is critical (what people would choose by default), 128+ is non-critical.

That sounds great! Although to really make it work, you need a Protobuf implementation that allows you to extract the unknown fields so you can check if any are critical.


aaronc commented May 6, 2020

I think for unknown fields we either need to make the choice that a) critical extensions are never added as regular fields or b) unknown fields always trigger failure.

I'd suggest taking a look at Adam Langley's blog post about extensibility. I think it winds up being pretty important:

https://www.imperialviolet.org/2016/05/16/agility.html

Thanks for sharing. So I think the big takeaways are make the extension system simple - in protobuf this is basically adding new fields - and allow newer clients to work with older servers - thus the desire to not just reject all messages which clients send with new fields but only ones that are flagged as critical.

Actually @tarcieri, one way we could flag critical fields in protobuf is just with field number ranges. Say 1-127 is critical (what people would choose by default), 128+ is non-critical.

That sounds great! Although to really make it work, you need a Protobuf implementation that allows you to extract the unknown fields so you can check if any are critical.

Yeah, we might need to do something hand-generated that takes the message descriptors and flags unknown critical fields.
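A sketch of what that separate pass could look like, scanning one level of the raw encoding with Go's protowire and rejecting only unknown fields in the critical range; the 127 threshold follows the example above and the helper name is hypothetical:

```go
package main

import (
	"fmt"

	"google.golang.org/protobuf/encoding/protowire"
)

// checkUnknownCritical rejects unknown field numbers in the critical range.
// A real implementation would also recurse into known embedded messages.
func checkUnknownCritical(raw []byte, known map[protowire.Number]bool) error {
	const criticalMax = 127 // e.g. 1-127 critical, 128+ non-critical
	for len(raw) > 0 {
		num, typ, n := protowire.ConsumeTag(raw)
		if n < 0 {
			return protowire.ParseError(n)
		}
		raw = raw[n:]
		m := protowire.ConsumeFieldValue(num, typ, raw)
		if m < 0 {
			return protowire.ParseError(m)
		}
		if !known[num] && num <= criticalMax {
			return fmt.Errorf("unknown critical field %d", num)
		}
		raw = raw[m:]
	}
	return nil
}

func main() {
	msg := protowire.AppendVarint(protowire.AppendTag(nil, 200, protowire.VarintType), 1)
	fmt.Println(checkUnknownCritical(msg, map[protowire.Number]bool{1: true})) // nil: 200 is non-critical
}
```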


tarcieri commented May 6, 2020

I'm not terribly familiar with the Go Protobuf ecosystem, but I believe you can use goproto_unrecognized for this purpose?


aaronc commented May 6, 2020

I'm not terribly familiar with the Go Protobuf ecosystem, but I believe you can use goproto_unrecognized for this purpose?

Yeah unfortunately it would break other stuff to enable that so it would likely need to be a separate parser pass.


clevinson commented May 6, 2020

We'll be discussing the topics related to this, and more specifically any outstanding blocking issues with ADR020 tomorrow on our bi-weekly architecture review call.

https://hackmd.io/UcQfA-JRQjG1zfKKWxH1gw

Zoom link in the HackMD if anyone from this thread would like to join.

@webmaster128
Member

does the signature process need to concern itself with the contents of every message, or should that be the responsibility of the message handler?

Looking at the protobuf document, this is no issue: every message is stored in an Any field, and Anys are embedded as raw data into the containing document. So the signature handler only needs to understand the outer envelope (1-2 small layers). The rest is opaque data for the verifier, that can be passed to the type-specific message handler.

But the issue with protobuf is that once a message is parsed you just have a go struct and we don't currently have a mechanism for retrieving unknown fields. They're just discarded. So this rejecting messages with unknown fields needs to happen higher up in the stack at parse time.

You need two parsing steps anyway (either explicitly or hidden by gogoproto): (1) the outer document including the Any field and (2) the message.

So the extra fields we are talking about are when a client has a different message schema for the same type URL with extra fields? Well, then I guess the chain is the source of truth and the client can add whatever data they want to a given message. But this is a matter of message type upgradability, not signature verification.


aaronc commented May 7, 2020

Yes, this is about message upgradeability, not signature verification per se. Although the canonicalization approach to signature verification prevents upgradeability (in the direction of newer clients interacting with older servers).


clevinson commented May 7, 2020

So summary from our architecture review call today (https://hackmd.io/UcQfA-JRQjG1zfKKWxH1gw?view), which focused on ADR020 (#6111)

We had consensus on the approach laid out in this PR, with a few minor additions:

  • Sequence number should never be allowed to be 0 (the first time a user submits a transaction), but rather should start with 1
  • We will proceed with the proposal from @aaronc in conversation with @tarcieri to reserve field numbers 1-1024 for critical fields, allowing non-critical fields above field number 1024

Outstanding question:

  • We still had yet to come to a conclusive decision as to whether the PublicKey in SignerInfo uses oneof vs Any to specify the type of public key algorithm used.

EDIT: It is expected that the resolution of this PublicKey question is not a blocker for ADR020, but will be discussed & resolved separately.

@aaronc Can you lay out more details for this outstanding point here?


aaronc commented May 13, 2020

There is a pretty big discussion that has been unfolding related to #6111 and #6174 in those PRs and now on discord. I think it has gotten to the point where it deserves its own list of alternatives with pros and cons. I will make my own attempt below.

The general context of this discussion is the UX of a multi-signer transaction. I will frame the various alternatives under three broad umbrellas - (1) Closed signers, (2) Open signers, (3) Open then closed signers.

(1) Closed signers (the current Cosmos SDK model)

Definition: the list of signers is fixed by the content of the messages. Extra signers are not allowed. Every signer must sign over the list of other signers

Multi-Signer UX:

  1. Tx composer proposes the transactions to possible signers and asks who will sign
  2. Tx composer composes messages with the list of signers that agreed to sign
  3. All signers that agreed to sign must sign
  4. Tx can be broadcast once the final signer signs

Pros:

  • Limiting the list of signers prevents vulnerabilities that can arise when malicious signers can add their signature to cause out of gas

Cons:

  • Limiting the list of signers means that the tx composer first needs to ask who will sign (phase 1) and then ask that same group of people to sign (phase 2)

Variation (a): Don't ask for pubkey and signing mode in step 1

This is effectively the current SDK model which doesn't include signing modes

Compared to variation (b):

  • Pro: Tx composer doesn't need to ask for pubkeys and signing mode in step 1, just account addresses
  • Con: Tx composer needs to over-estimate the gas limit because different pubkeys and signing modes may have different gas (possibly orders of magnitude difference for nested multisigs)
  • Con: an attacker could manipulate mode and pubkey flags if vulnerabilities existed there (this could maybe be mitigated by making everyone sign their own pubkeys and mode flags)

Variation (b): Ask for pubkey and signing mode in step 2

This is the model proposed in #6111

Compared to variation (a):

  • Pro: prevents attacker manipulation of mode and pubkey flags if vulnerabilities existed at that layer
  • Pro: Tx composer can accurately estimate gas limit because all the pubkey types and signing modes are known
  • Con: Signers need to share their pubkey in step 1 and also fix the signing mode they will use in step 3

(2) Open signers (the Weave model)

Definition: extra signers not specified by messages are allowed

This is Weave's model. (@webmaster128 if I get something wrong here please correct me)

Multi-Signer UX:

  1. Tx composer composes messages and shares with list of potential signers
  2. Any signers can sign and broadcast immediately, as long as there are enough required signers the tx will succeed

Pros:

  • Much simpler workflow for, say, a 3-of-5 multisig - the tx composer does not need to know which 3 out of 5 signers will sign before composing the message

Cons:

  • Malicious actors can intercept tx's and add extra signers that will cause out of gas errors and the fee payer will still pay the fee. Basically a zero cost transaction malleability attack

Variation (a): Original Weave model - no signature limit

As @webmaster128 shared, this had the following cons:

Before we discovered and fixed a bunch of attacks late 2019 you could do the following:

  1. Spam a ridiculous amount of unnecessary data as part of a signature (since protobuf allows unnecessary fields)
  2. Add an arbitrary amount of unneeded signatures for no reason
  3. Change signers[0] which was used as a default value in some places, since signatures are unordered. This allowed someone else to turn any signer into the fee payer, among other things.

Variation (b): Fixed Weave model

According to @webmaster128:

  3. was a combination of carelessness and a type system that has no set type. The signers were originally designed as a set but then interpreted as an ordered list in some situations. This was solved by never using the order of signatures in the application.

A Cosmos SDK-style tx size limit did not exist. 1. and 2. were stopped by implementing a tx fee based on byte size. Now you can spam as much as you want, if you pay for it. Fees are calculated based on the expected number of signatures.

No gas system exists in Weave. But the tx size fee automatically puts a limit on the number of signatures. If someone adds too many signatures, the tx is rejected since too big. Otherwise it will pass.

I believe this is the actual issue: iov-one/weave#1091

While this solves some issues, it still appears to be exploitable (??), i.e. if the message was intercepted an extra signer could still add themselves and screw things up for the fee payer, is that true??

(3) Open then closed signers

This is the approach proposed in #6174

Multi-Signer UX:

  1. Tx composer composes messages without fixing the list of required signers and shares with a group of potential signers
  2. Any signers can sign whether or not they are required/expected
  3. As soon as there are enough signers, any signer can become the fee payer and sign and broadcast the final message that fixes the list of signers and the fee

Pros:

  • allows a UX very similar to (2) without the malleability attack surface
  • allows the fee payer to accurately estimate gas limit
  • allows the fee payer to know that they are paying for the verifications they want and not some random attacker's signatures

Cons:

  • compared to (2), requires the fee payer to be the last signer

Apologies for this being such a long post. I really want to get to the bottom of this and agree on an approach and move forward.

In my personal opinion, approach (3) seems to have the most benefits and least tradeoffs for multi-signer transactions. It allows the benefits of (2) without the downsides.

Any variation of (1) would be the status quo and acceptable. I don't see a huge difference between (1)(a) and (1)(b) - it's still the same number of phases and requires fixing signers before signing. I would prefer the guarantees of (b) but at this point whatever. Neither seems to actually give a very good multi-signer UX which I thought was the point of this.

That's why I recommend approach (3) #6174 - good security guarantees + good multi-signer UX.


zmanian commented May 13, 2020

I feel like 3 is the correct direction as well.


webmaster128 commented May 13, 2020

Thank you @aaronc for the great summary. Here are a few additions/remarks and then, separately, my personal vote.

(1)
Additional Pros:

  • The signature aggregation is fully parallel (i.e. signatures can be collected in any order since they don't depend on each other)

Regarding Con:

Limiting the list of signers means that the tx composer first needs to ask who will sign (phase 1) and then ask that same group of people to sign (phase 2)

The point does not apply to a multisig-pubkey, which is a single signature that can be satisfied by any signatures of the group that combined make up a valid multi-pubkey signature. For the individual messages, the required signers are deterministic already.

(1a)
Regarding Pro:

Tx composer doesn't need to ask for pubkeys and signing mode in step 1, just account addresses

The tx composer does not even need to ask for the addresses. They exist automatically as part of the messages.

(2)
Comment: Right, using the Weave model in the Cosmos SDK would be exploitable since an external party can add unnecessary signatures causing the gas consumption to exceed the limit. If the manipulated tx is processed before the original tx, the tx fails but the gas cost is spent. In Weave this issue does not apply since there is no gas system.


There is a significant difference between (1a) and (1b): The list of required signer addresses is available for free locally in the messages. Needing to know the public key for each required signer address can only be automated if all pubkeys are stored on-chain, which is not a given for new accounts. The communication overhead of needing to know the signing modes of each signer is relevant as it increases the roundtrips of human communication. Additionally it violates separation of concerns.

My vote goes to (1a) as it

  • is simple to implement
  • is close to the current SDK
  • no attack vector known as long as all signing modes enabled on chain are secure
  • is most convenient for humans since no dependencies between signers exist

ps.: there is a variant of (3), let's call it (4), which only allows a single top-level signature and keeps all other signatures at a lower level. It comes with the same two-phase signing process as (3) but simplifies everything. Most transactions only need a single top-level signature (which can be one of a multi-pubkey group) and the few transactions with multiple inner signatures can be processed in two steps. It is a radical change. Let me know if someone is interested to learn more.


aaronc commented May 13, 2020

is most convenient for humans since no dependencies between signers exist

But that's not true because they still need to agree upon the list of addresses to include in the messages. If you wanted to coordinate 3 signers out of 5, you would still need to ask all 5, then get an ACK from 3 of them and then compose the messages and then have them sign it. I acknowledge that the weave model made this simpler and I think (3) gets pretty close.


aaronc commented May 13, 2020

Based on discussions on discord, I have updated my original PR #6111 with the following:

  • made a small modification to my initial approach where AuthInfo is split out from TxBody to gracefully enable (3) and/or (1)(a) in the future if so desired
  • approaches (3) as well as (1)(a) mentioned as possible future improvements
  • handling of signing modes for the SDK's nested multisig pubkeys added
  • unknown field handling added

I agree with @alexanderbez that we should try to keep our initial implementation as simple and as secure as possible. Even if we did come to consensus that (3) provides a better UX, it is beyond the scope of our current work to address this. But I think it's good to document the alternatives and then see what users ask for in practice. (1)(b) seems to be the simplest to implement with the most security guarantees and wouldn't preclude (3) and/or (1)(a) from existing as alternate signing modes in the future.

@alexanderbez
Contributor

What is left to get the work started? What needs to be merged?


aaronc commented May 13, 2020

What is left to get the work started? What needs to be merged?

#6111, then maybe an init PR with the proto file and then implementing the signing logic.

@webmaster128
Member

is most convenient for humans since no dependencies between signers exist

But that's not true because they still need to agree upon the list of addresses to include in the messages. If you wanted to coordinate 3 signers out of 5, you would still need to ask all 5, then get an ACK from 3 of them and then compose the messages and then have them sign it.

There are two types of multiple signatures currently discussed:

  1. The list of required message signers that is calculated from the GetSigners result of each message.
  2. A multisig account type that has a separate group account and thus a separate address. It has a special type of pubkey that encodes the participants and threshold.

The latter results in a single (nested) top-level signature, i.e. is one signer no matter which group participants signed. So no coordination is required on which participants will sign.

The first one is not suited to implement n/m multisig at all because the nature of the GetSigners() method makes flexible signers impossible. It is just different people who authorize different things. This is needed to do e.g. atomic trades where Alice signs a token send message that sends tokens to Bob, Bob signs a smart contract execution that transfers asset X to Carl, and Carl signs 3 messages sending tokens to Alice, Bob and a charity. This gives us 5 messages signed by 3 pre-specified entities. The state of the current SDK is that for each of those parties you only need to know the addresses, which are part of the messages.
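For concreteness, a rough illustration (not the SDK's actual multisig types) of why the second kind looks like a single signer from the outside:

```go
package multisig

// ThresholdPubKey bundles the group's participants and threshold, so the
// outer transaction only carries one top-level signature slot for the group.
type ThresholdPubKey struct {
	Threshold uint32
	PubKeys   [][]byte // participant public keys
}

// GroupSignature nests the participants' signatures; which participants
// signed is internal to the group and invisible to the outer tx.
type GroupSignature struct {
	Signatures map[int][]byte // participant index -> signature over the same sign bytes
}

// satisfied reports whether enough distinct participants signed; verifying
// each signature against the sign bytes is elided in this sketch.
func (pk ThresholdPubKey) satisfied(gs GroupSignature) bool {
	var count uint32
	for idx := range gs.Signatures {
		if idx >= 0 && idx < len(pk.PubKeys) {
			count++
		}
	}
	return count >= pk.Threshold
}
```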

@clevinson
Contributor

Closing this now, as this issue was targeting an update of ADR020.

#6213 will be used for tracking the actual implementation of the updated ADR as described in #6111
