
CIP-31: Reference inputs #159

Merged
merged 5 commits into cardano-foundation:master from michaelpj:mpj/reference-inputs on Mar 1, 2022

Conversation

michaelpj
Contributor

This CIP proposes adding "reference inputs" in the style of Ergo's "data inputs".

- Referenced outputs are _not_ removed from the UTXO set if the transaction validates.
- Reference inputs _are_ visible to scripts.

Finally, a transaction must _spend_ at least one output.[^2]
Contributor

@L-as L-as Nov 29, 2021

Even if you did somehow "allow" this, wouldn't this be impossible since you need to pay the fee somehow? I.e. you could maybe reword this, if you think that it is relevant.


Implicit coin (reward withdrawals + deposit reclaims) can be used to pay the fee, so you could allow it

Contributor Author


You could potentially rely on a mechanism like this. Nonetheless, transactions today are already required to spend at least one UTXO, even if e.g. they could cover the fee with withdrawals. We simply don't change that restriction here.


@delicious-lemon

delicious-lemon commented Dec 6, 2021

I think we need a feature like this, but I have a suggestion for an alternate implementation. My concern is that we may regret losing the read-once, affine nature of UTXOs. I think this is a core feature of the UTXO model that we should be hesitant to dispense with.

We could implement something that looks more like a Reader monad than a reference input. To do that we would need to introduce the concept of a constrained (or content-addressable) input and defer some construction of the Tx to the node.

All we need to say is that we want any TxOut that matches exactly the data we want to read:

```haskell
data TxOut = TxOut {
    txOutAddress   :: Address,
    txOutValue     :: Value,
    txOutDatumHash :: Maybe DatumHash
    }
```

and so long as the transaction reproduces an identical TxOut, we can form a Reader-monad-like chain through which the data is threaded. This means these transactions can form a chain within a block, since none of the transactions in the chain care about a specific TxOutRef.

If the transaction does not reproduce exactly this output then further reading transactions will fail in the node transaction construction phase since they cannot find a matching input.

This could be done efficiently by introducing a new content-addressable TxOut lookup table in the node. We currently have a TxOutRef -> TxOut table. We could add a TxOutHash -> Set TxOutRef table and allow specifying inputs and outputs by reference using a TxOutHash. This table could be opt-in for a fee.
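
A minimal sketch of what such an index might look like (the type and function names here, e.g. `UTxOIndex` and `lookupByContent`, are purely illustrative and not part of this suggestion):

```haskell
-- Illustrative only: a content-addressed index kept alongside the existing
-- TxOutRef -> TxOut table, keyed by the hash of the (Address, Value, Datum) triple.
import           Data.ByteString (ByteString)
import           Data.Map        (Map)
import qualified Data.Map        as Map
import           Data.Set        (Set)
import qualified Data.Set        as Set

newtype TxOutHash = TxOutHash ByteString
  deriving (Eq, Ord)

data UTxOIndex ref out = UTxOIndex
  { byRef     :: Map ref out              -- existing TxOutRef -> TxOut table
  , byContent :: Map TxOutHash (Set ref)  -- proposed opt-in TxOutHash -> Set TxOutRef table
  }

-- Find all candidate TxOutRefs whose output content hashes to the given value.
lookupByContent :: TxOutHash -> UTxOIndex ref out -> Set ref
lookupByContent h idx = Map.findWithDefault Set.empty h (byContent idx)
```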

Some advantages to this approach are:

  1. UTXOs are still spend-once/read-once things
  2. we get read-only-input-like behavior by pipelining multiple operations per block
  3. there is no need to consider conflicts when a read-only input is spent
  4. we get "controlled referencing" automatically since you must be able to spend the UTXO to read it

@WhatisRT
Contributor

WhatisRT commented Dec 7, 2021

I think we need a feature like this but I have a suggestion for an alternate implementation.

While it's nice to keep some of the UTXO's current properties, I see three problems with this proposal:

  1. It introduces a non-deterministic aspect into the UTXO. It is not entirely specified here how looking up an output would work, but there are essentially two options: either the 'old' TxIn and the 'new' TxIn both work, or only the 'old' one works (there's also the option to only do this for the current block, with all future blocks using the 'new' TxIn). If we disallow the 'new' TxIn from being used, there can be an unexpected failure for users who don't know about this feature and trigger it unexpectedly. And if we allow both, then an output can accumulate a large number of TxIns that refer to it.
  2. It's a lot more difficult to implement than what this CIP proposes.
  3. It requires active collaboration. This could be enforced by validators, but that would make it expensive.

@michaelpj
Contributor Author

@delicious-lemon

To do that we would need to introduce the concept of a constrained (or content addressable) input and defer some construction of the Tx to the node.

This is an interesting idea, but it loses another property that we care about: that the transaction specifies exactly what it does, and is self-contained. For example, losing that threatens the determinism of script execution (I think we could probably keep it working, but we'd have to think carefully about it).

You could imagine implementing something like this as a layer 2 solution, perhaps, where "partial" transactions are elaborated into "fully-specified" transactions in order to be included in the base settlement layer. But I feel less positive about it as a feature of layer 1.

Plus, this "constrained script inputs" feature is much more powerful than just reference inputs. It feels odd to implement it just to do referencing. Perhaps there are other use cases that make it compelling, but otherwise it feels like we're adding a powerful feature somewhat blindly, which might lead to unexpected outcomes.

and so long as the transaction reproduces an identical TxOut we can form a Reader Monad like chain through which the data is threaded

Notably, this means you need to implement more of the logic in the script that locks the output: it has to insist that you produce a matching output when you reference it.

we get "controlled referencing" automatically since you must be able to spend the UTXO to read it

It's not even clear if "controlled referencing" is desirable. And if we did do it, we'd probably want to use a different script than the script that controls spending: taking the oracle example, you don't want the same script to control who can use the data and who can reference it. (Well, maybe you could make it work with a clever script and redeemers.)

At any rate, I think the design there is less clear, and your proposal actually gets rid of uncontrolled referencing, which I think is definitely useful.


Anyway, I think this is definitely an interesting idea and I'd encourage you to write it up if you're keen.

@delicious-lemon

delicious-lemon commented Dec 7, 2021

1. It introduces a non-deterministic aspect into the UTXO. It is not entirely specified here how looking up an output would work, but there are essentially two options: either the 'old' `TxIn` and the 'new' `TxIn` both work, or only the 'old' one works (there's also the option to only do this for the current block, with all future blocks using the 'new' `TxIn`). If we disallow the 'new' `TxIn` from being used, there can be an unexpected failure for users who don't know about this feature and trigger it unexpectedly. And if we allow both, then an output can accumulate a large number of `TxIn`s that refer to it.

I am not sure what you mean. Perhaps there is some misunderstanding. I propose we add the ability to look up TxOutRefs on the ledger by the hash of the TxOut triple (Address,Value,Datum). This would be an extension to the current functionality so there is no danger that it will interfere with how anything currently works. This would be an opt-in feature since this requires storing additional data on the ledger. i.e. you make a Tx with an output and flag that it should be stored such that it can be looked up by its TxOutHash.

On non-determinism - that's the whole point. We get non-determinism w.r.t. TxOutRef but maintain determinism w.r.t. TxOut content - we can still verify up front that the Tx will validate if a TxOut can be found with the specified TxOutHash. The lookup for a TxOutHash will be a similar cost to the lookup for a TxOutRef. If this step is successful validation proceeds as normal otherwise we fail as normal.

2. It's a lot more difficult to implement than what this CIP proposes.

Why do you believe this? I state the opposite to be true. :)

3. It requires active collaboration. This could be enforced by validators, but that would make it expensive.

I am not sure what you mean by this. I am not proposing we change how validation works and I agree we should keep validation deterministic.

@delicious-lemon

@michaelpj

Perhaps constrained inputs is the wrong name for this. That's a more powerful bag of magic that I agree we shouldn't look into for layer 1. I'm suggesting we could specify inputs and outputs to be TxOuts that already exist on the ledger with exactly matching Address, Value, Datum, identified by a hash of this triple. The node can complete this Tx closure by computing the TxOutRefs as required. This keeps validation deterministic and allows us just enough non-determinism to get reference-input like behaviour without having reference inputs.

I'm not sure whether this would be more work to implement than reference inputs. I feel like it may be considered less powerful though since we would lose uncontrolled referencing. My concern is that losing the affine read-once nature of UTXOs might make the computation model more complex and thus harder to audit and prove properties for - though I have no evidence for this, it's just a feeling.

Anyway, I think this is definitely an interesting idea and I'd encourage you to write it up if you're keen.

I'll put some markdown together. 👍

@L-as
Contributor

L-as commented Dec 7, 2021

I think that "content-addressed UTXOs" is a good idea, because the location/transaction of the UTXO doesn't matter, the only thing that matters is its content. I don't see why determinism would be lost if you have to specify the exact content of the UTXO, but if you support only specifying it partially, e.g. a UTXO with X token, without caring about the datum, that is a lot harder to make deterministic.

However, this doesn't replace the need for a reference input, and it will not work well at all with CIP-33. The two extensions to the ledger aren't mutually exclusive as they don't conflict, so I don't think it's a good reason not to add this feature to the ledger.

@delicious-lemon

@L-as I think CIP-33 could work with "content-addressed UTXOs". We would just need a way of saying "this is the hash of the script, it should be in this UTXO" and have a mechanism for pulling it out of the "content-addressed UTXO".

I am in agreement that these two ideas could coexist in the system. Read-only access to the ledger would be very useful. Additionally - perhaps content-addressed UTXOs are more powerful than I initially estimated - consider a UTXO that represents a DEX as a content-addressed UTXO supporting an order book with execution/matching decided by the node. Users could place an order relative to a content-addressed UTXO that they speculate will come into existence and hope that a node will be able to engineer a sequence that supports their transaction. The node could be performing some application specific arbitrage operation to enable the sequencing.

@michaelpj
Contributor Author

I'm not sure whether this would be more work to implement than reference inputs.

FWIW my belief is that yes, this would be much more work to implement. Reference inputs requires a tag on inputs, some small changes to the ledger rules, and some small changes to the transaction context. "Constrained inputs" requires changes to how nodes construct blocks, changes to the transaction format, new kinds of entity, possibly a content-addressed lookup store... lots of things.

@WhatisRT
Contributor

WhatisRT commented Dec 8, 2021

@delicious-lemon

  1. Yes, I misunderstood your proposal there, but of course the problem of non-determinism is still present. Which output is picked if there are multiple outputs with the same content? Maybe some of them are spendable with the transaction's witnesses and some are not. One could of course make up a consistent scheme (use the one with the smallest TxIn that the transaction can spend), but we care a lot about knowing exactly what a transaction will do before sending it. Even if we can make sure that Plutus scripts won't change their result, this is still a property that I'd argue we care more about than the read-once nature of the UTxO.
  2. I've already written the spec for this proposal, which is very small. Your proposal does touch a lot more places.
  3. This is the same point as 'controlled referencing'. My argument is that allowing anyone to use a reference would require a Plutus validator to force the person using it to also return it to its place. Which means that using it for CIP-33 could then pull in a bunch of extra scripts that are just used for validating the reference. It should still be possible to not have to supply any scripts in that scenario, but there's a lot of expensive overhead.

- The spending conditions on referenced outputs are _not_ checked, nor are the witnesses required to be present.
- i.e. validators are not required to pass (nor are the scripts themselves or redeemers required to be present at all), and signatures are not required for pubkey outputs.
- Referenced outputs are _not_ removed from the UTXO set if the transaction validates.
- Reference inputs _are_ visible to scripts.
Contributor

Together with CIP-33, reference inputs would then do two things: Be part of the context passed to scripts, and if they contain a reference script, that would get added to the witness set. This combination is a bit arbitrary. I worry about a future where we suddenly need to have different types of reference inputs because we need to be able to reference some of the things but not others.


Contributor

@GeorgeFlerovsky GeorgeFlerovsky Dec 15, 2021


Together with CIP-33, reference inputs would then do two things: Be part of the context passed to scripts, and if they contain a reference script, that would get added to the witness set. This combination is a bit arbitrary. I worry about a future where we suddenly need to have different types of reference inputs because we need to be able to reference some of the things but not others.

If a transaction's witnesses map contains additional (script hash, script source) pairs from the reference inputs, does it really make a difference? All of the transaction's inputs were intentionally locked by specific script hashes, so they would never unintentionally refer to these additional scripts. In other words, I don't see how including extra/unused scripts in the witnesses would ever change the transaction validation result.

Contributor Author


Be part of the context passed to scripts, and if they contain a reference script, that would get added to the witness set.

My intention was that with CIP-33 any input that corresponds to an output with a reference script would see it added to the witness set. So I claim that reference inputs still only do one (conceptual) thing: they let you look at all the information in an output. It seems reasonable to me that looking at the information in an output that contains a reference script should let you use the reference script as a witness.

Rephrasing your worry, though, what you're suggesting is that we might want to e.g. restrict the information that a reference input lets us look at. I can't see a reason for that, but maybe there is one.

@michaelpj
Contributor Author

I've updated the text with some clarifications and a small discussion about controlling referencing.

This could potentially be an entire additional address, since the conditions might be any of the normal spending conditions (public key or script witnessing).

However, this would make outputs substantially bigger and more complicated.

Contributor


I think it's cleaner to add an optional "reference validator script" field rather than having "check inputs". How much overhead will an empty field add to the serialisation of a UTXO?


I also prefer the optional "reference validator script" field, it does seem cleaner. To be really explicit about the behavior: If the reference validation script was absent it would be the same as validation passing. I like the model of having separate validators for the two semantic actions of referencing vs spending.

@michaelpj
Contributor Author

Dear community, I need some input. It is unclear to me whether controlled referencing (as defined in the CIP text currently) is a key feature for people or not.

Please react to this comment with your feelings:

  • 👍 for "reference inputs are useful to me regardless of controlled referencing"
  • 😕 for "reference inputs are useful to me without controlled referencing, but I also want controlled referencing"
  • 👎 for "reference inputs are useless to me without controlled referencing"

@GeorgeFlerovsky
Contributor

GeorgeFlerovsky commented Jan 6, 2022

Reference inputs without controlled referencing are useful in applications where the referenced data is not monetized (i.e. no compensation to the data originator is expected), because the dApp is referencing its own intrinsic data. Examples:

  • Users reading a dApp's shared global state, so that they can actually use the dApp. This state does not need to be monetized directly itself, because usage of the dApp can be monetized separately via the usual spending validators.
  • Parallelized computation where one thread needs to know how far the other threads have progressed, as of the current active UTXO set. Reference inputs are needed here because each thread should have the exclusive right to mutate its progress, without contention from the other threads checking its progress, but I don't think we need controlled referencing for this.
  • Updateable dApps (using this CIP and CIP-33: Reference scripts #161), where general dApp scripts reference a script from a UTXO that holds a particular dApp NFT. The reference script can be updated via dApp governance, whereby community voting can allow the dApp NFT UTXO to be consumed and its NFT to be moved to a new UTXO with an updated script.

Referencing conditions (i.e. different conditions for referencing than spending) are useful in applications where the referenced data itself is being monetized, because the data comes from outside of the dApps that use it. In other words, the data provider does not derive sufficient incentive from the benefits that users gain from using the data, and must be compensated separately in order to provide the data. Examples:

  • Oracles that inject real world data into the blockchain (e.g. financial data, world events, births/deaths), as a service.
  • One dApp wants to allow other dApps to "officially" subscribe to its state, with designated UTXOs under controlled referencing, as a way of subsidizing its operation.

The main benefit that referencing conditions provide is the ability for a user to prove on-chain that she met the data provider's terms for the data that she has used. For example, upon referencing a UTXO with referencing conditions, the user can mint an NFT that will witness to subsequent transactions that the referencing conditions have been met in this transaction. Such NFT witnesses would allow:

  • The data provider to accept liability claims for its data only from users who hold such NFT witnesses of compliant use.
  • One dApp community to mandate that its users must directly reference another dApp's designated UTXOs with referencing conditions, or else provide a witness for having done so recently.

On the other hand, the usefulness of "check inputs" controlled referencing (i.e. referencing allowed if spending conditions are met) varies by context:

  • In the multi-user context, it's useless because it means that anyone who can reference the input can also spend/destroy it. In my opinion, granting spending rights to users defeats the purpose of reference inputs in the multi-user context, because now any user can unilaterally prevent other users from referencing the UTXO in subsequent blocks.
  • In the single-user context, it can be seen as a degenerate case of referencing conditions, whereby the UTXO owner can legitimately reference the UTXO without meeting its referencing conditions (if any), and can mint an NFT witness of having done so, which can be shown to any scripts that require such witnesses.

@michaelpj
Contributor Author

@GeorgeFlerovsky thank you for that useful summary. I agree with most of what you wrote. A couple of things.

A key use case for reference inputs is to support CIP-33 (reference scripts). I think those would be pretty useful with just reference inputs, although there are perhaps interesting opportunities with controlled referencing too.

On check inputs:

In the multi-user context, it's useless because it means that anyone who can reference the input can also spend/destroy it.

That's not necessarily true, as I hinted in the text. You can use the redeemer to control it:

  • Redeemer is data Action = Check | Spend ...
  • Validator checks that
    • If the redeemer is Check, then the input appears in the list of check inputs, and the referencing conditions are met
    • If the redeemer is Spend, then the input appears in the list of normal inputs, and the spending conditions are met

So you can encode different referencing and spending conditions into a single validator.
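
A rough sketch of that validator shape (placeholder types and helper names, not real Plutus Tx code):

```haskell
-- Hypothetical sketch only (not real Plutus Tx code): placeholder types and
-- helpers stand in for the real datum, script context, and condition checks.
data Datum         = Datum
data ScriptContext = ScriptContext
data Action        = Check | Spend

-- Placeholder checks; a real validator would inspect the script context.
ownOutputIsCheckInput, ownOutputIsNormalInput :: ScriptContext -> Bool
ownOutputIsCheckInput  _ = True
ownOutputIsNormalInput _ = True

referencingConditionsMet, spendingConditionsMet :: Datum -> ScriptContext -> Bool
referencingConditionsMet _ _ = True
spendingConditionsMet    _ _ = True

-- Dispatch on the redeemer: different conditions for referencing vs spending.
validator :: Datum -> Action -> ScriptContext -> Bool
validator datum action ctx = case action of
  Check -> ownOutputIsCheckInput  ctx && referencingConditionsMet datum ctx
  Spend -> ownOutputIsNormalInput ctx && spendingConditionsMet    datum ctx
```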

(Aside: this makes me realise that it is not true that check inputs give a proof that you could spend the output... the example above gives exactly a case where you could "check" an output but not spend it!)

@GeorgeFlerovsky
Contributor

GeorgeFlerovsky commented Jan 6, 2022

On check inputs:

In the multi-user context, it's useless because it means that anyone who can reference the input can also spend/destroy it.
That's not necessarily true, as I hinted in the text. You can use the redeemer to control it:

  • Redeemer is data Action = Check | Spend ...
  • Validator checks that
    • If the redeemer is Check, then the input appears in the list of check inputs, and the referencing conditions are met
    • If the redeemer is Spend, then the input appears in the list of normal inputs, and the spending conditions are met

So you can encode different referencing and spending conditions into a single validator.

Cool! In that case, would it be accurate to say that the following two are equivalent?

  • Under the "referencing conditions" scheme, a UTXO with spending validator Datum -> SpendingRedeemer -> ScriptContext -> Bool and a referencing validator Datum -> ReferencingRedeemer -> ScriptContext -> Bool.
  • Under the "checked inputs" scheme, a UTXO with a combined spending/referencing validator Datum -> CombinedRedeemer -> ScriptContext -> Bool, where data CombinedRedeemer = Spending SpendingRedeemer | Referencing ReferencingRedeemer

In other words, if you squint your eyes, then the "referencing conditions" scheme looks like the (a,a) type, and the "checked inputs" scheme looks like (Bool -> a), which are equivalent because a × a = a².
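
Spelled out in Haskell, that's just the standard pair / function-from-Bool equivalence (a generic sketch, with `a` standing in for "a validator"):

```haskell
-- Sketch of the equivalence: a pair of validators (referencing, spending)
-- corresponds exactly to a single validator that dispatches on a Bool
-- (or, equivalently, on a two-constructor redeemer).
toCombined :: (a, a) -> (Bool -> a)
toCombined (refV, spendV) b = if b then refV else spendV

fromCombined :: (Bool -> a) -> (a, a)
fromCombined f = (f True, f False)
```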

If they are equivalent, then perhaps we don't need the additional/optional "referencing conditions" field. Of course, there might be some overhead involved with combining spending and referencing logic into a single validator.

@asutherlandus

asutherlandus commented Jan 6, 2022

I think that is an interesting suggestion, but I am wondering whether these two are really equivalent in terms of side effects. I believe one reason for the separate fields:

We introduce a new kind of transaction input, a reference input.
Transaction authors specify inputs as either normal (spending) inputs or reference inputs.

is for the transaction author to specify the intended semantics, so the interpreter can determine whether the input participates in the transaction balance check and whether the UTXO gets marked as spent.

Which makes me wonder if this idiom:

Redeemer is data Action = Check | Spend ...

would get used very much in practice, unless for some reason you wanted to handle both cases with only 1 script.

I believe you could combine them if you added a flag to the Context and it was visible to the interpreter. The current proposal seems very economical, but perhaps not as explicit.

Please correct me if I got any of the semantics wrong. I'm not super confident in my understanding of how this all works.

@anton-k

anton-k commented Jan 7, 2022

I'm in favour of a separate reference-input field in the context, rather than mixing them with ordinary inputs. Also, could they not be packed into a set at the node level, so that it's possible to index them just as it is for outputs? That would be great to have. Fixing that for inputs would also be great, but that's a different story.

@michaelpj
Contributor Author

Thanks for the input, everyone. Given the timelines on which we'd like to do this work, the lack of design consensus, and the lack of anyone saying that the lack of controlled referencing makes this CIP worthless for them, I'm going to leave it out of scope for this CIP. We can revisit it in future.

With that said, I think this is ready to be merged as Draft. I will revisit it and update it to Active once the implementation has progressed and e.g. the CDDL is pinned down.


I'm in favour of a separate reference-input field in the context, rather than mixing them with ordinary inputs.

Yep, that's the current proposal.
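
Roughly, the intention is something like the following (field names are illustrative only; the exact context shape is up to the implementation):

```haskell
-- Illustrative sketch only; not the final context definition. Reference inputs
-- get their own list in the script context, separate from the ordinary inputs.
data TxInInfo = TxInInfo   -- placeholder for the usual resolved-input information

data TxInfo = TxInfo
  { txInfoInputs          :: [TxInInfo]  -- ordinary (spending) inputs
  , txInfoReferenceInputs :: [TxInInfo]  -- reference inputs: visible to scripts, never spent
  }
```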

Also, could they not be packed into a set at the node level, so that it's possible to index them just as it is for outputs?

That's definitely out of scope. For this proposal I think the interface should be the same as for normal inputs, and if we change them we should change them both. Perhaps you should write a CIP :)

@peterVG

peterVG commented Jan 22, 2022

This CIP will be ground-breaking for Cardano oracles, which is what I'm working on. There's no point continuing with the current architecture restrictions (having to spend transaction outputs to read data, plus only one script can read per block) if CIP-31 is nearby. I realize there are many extenuating factors, but it sounds like IOG is fast-tracking this CIP, is that right @michaelpj? I'm trying to determine whether I can expect this CIP feature in the next fork/chain update. AFAIK those are scheduled for February and then June, right?

@KtorZ
Member

KtorZ commented Jan 25, 2022

it sounds like IOG is fast-tracking this CIP,

No one is fast-tracking any CIP 😊 ! Plus, there's a clear separation between CIPs (which are proposals of possible solutions) and actual implementations. While IOG is seemingly working on implementing CIP-0031, CIP-0032 and CIP-0033, they are still following the same process as other CIPs, going through multiple rounds of reviews and validation by editors and the community 👍

@peterVG

peterVG commented Jan 25, 2022

Well, to be clear: I would love for IOG to fast-track this particular CIP. It can't come on-chain soon enough IMHO. I say "fast track" because John Woods, Director of Cardano Architecture at IOG, has publicly mentioned this CIP and its two related CIPs twice now in Cardano 360 updates. That's how I found out about them. I understand and respect that all CIPs have to go through the same review and editorial process, but I also realize that the individuals involved can choose to prioritize that work for whatever reason (i.e. "fast track").

@michaelpj
Contributor Author

This isn't the place to discuss timelines.

@peterVG

peterVG commented Jan 26, 2022

Forgive my ignorance @michaelpj, I am brand new to the CIP process. I am an interested party in this CIP, my oracle project will benefit greatly from it. I am trying to plan my own Cardano development activities accordingly. I currently have no sense of whether to expect to see this CIP live in 1 month or 1 year. Please direct me to the correct forum to ask about timelines. Thank you.

@rphair
Collaborator

rphair commented Jan 26, 2022

@peterVG there would be others willing to discuss the timeline(s) & other advocacy in this forum category if you create a thread there: https://forum.cardano.org/c/developers/cips/122

If & when these forum discussion threads generate insight into a CIP itself, sometimes the authors will also include them in the Comments-URI section in the CIP header.


This is actually a very important feature.
Since anyone can lock an output with any address, addresses are not that useful for identifying _particular_ outputs on chain, and instead we usually rely on looking for particular tokens in the value locked by the output.
Hence, if a script is interested in referring to the data attached to a _particular_ output, it will likely want to look at the value is locked in the output.


Fix: look at the value that is locked in the output

After discussion with the ledger team, there is a preference for
sticking to the principle of never silently omitting information. For
this reason instead of just omitting reference inputs when creating the
transaction context for old scripts, we ban that occurrence.

Also, optional datums -> extra datums
@michaelpj
Contributor Author

I updated the proposal to follow the principle that we should never silently omit information. That means that instead of silently omitting reference inputs when creating the context for old scripts, we will fail, in phase 1, a transaction which spends from an old script and includes reference inputs.
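
A minimal sketch of that phase-1 check (names are illustrative; the real rule is expressed in the ledger rules, not in code like this):

```haskell
-- Illustrative only: reject, in phase 1, a transaction that includes reference
-- inputs while also spending from an old (V1) script, instead of silently
-- hiding the reference inputs from that script.
data Lang = PlutusV1 | PlutusV2
  deriving Eq

phase1AcceptsRefInputs
  :: [Lang]  -- languages of the scripts the transaction spends from
  -> Int     -- number of reference inputs in the transaction
  -> Bool
phase1AcceptsRefInputs scriptLangs nRefInputs =
  nRefInputs == 0 || PlutusV1 `notElem` scriptLangs
```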

@michaelpj
Contributor Author

See also #161 (comment).

@crptmppt
Contributor

crptmppt commented Feb 14, 2022

This was discussed at Editor meeting 38 (see notes) - it is assumed to be on hold until further development (do flag if it's ready to review again).

@michaelpj
Contributor Author

It's ready for review.

Member

@KtorZ KtorZ left a comment


Reviewed the addition about forbidding usage of reference inputs in conjunction with old Plutus scripts (i.e. V1). This makes sense; limiting foot-guns when building a DeFi ecosystem is definitely worthwhile.

@mangelsjover
Contributor

mangelsjover commented Feb 28, 2022

Hi @michaelpj

Could you please add a justification of why this should be added as a Phase 1?

Thank you!

M.

@michaelpj
Contributor Author

Could you please add a justification of why this should be added as a Phase 1?

I'm sorry, I don't understand what you're asking for. Everything is phase 1 by default, only actually running scripts is phase 2.

@KtorZ
Member

KtorZ commented Mar 1, 2022

@michaelpj in the last editors' meeting, if you recall, someone in the chat brought up the question of why the ledger should fail during phase-1 validation when presented with reference inputs and a PlutusV1 script. So the idea was to provide a rationale for that, but I see that the rationale is actually already there:

  • Omitting information may lead scripts to make assumptions about the transaction that are untrue; for this reason we prefer not to silently omit information as a general principle.
  • That leaves us only one option: reject transactions where we would have to present information about reference inputs to old scripts.

Thus, happy to proceed with that one as discussed 👍

@KtorZ KtorZ merged commit 642a006 into cardano-foundation:master Mar 1, 2022
@L-as
Contributor

L-as commented Mar 30, 2022

Question @michaelpj: Can a transaction reference a UTXO if it's consumed in the same block by another transaction?

@WhatisRT
Contributor

Yes, blocks are essentially irrelevant to what's going on in the UTxO set. As long as the transaction that references an output comes before the one that spends it, everything works, even if they are in the same block. It also works in reverse: an output can be created and referenced in the same block.

But if you send transactions that depend on each other in such short succession that they might end up in the same block, there's some chance that they might not be in the order you've sent them, which would mean that only one of them would actually end up on the chain (and the other one would need to be submitted again).

@michaelpj
Contributor Author

I think there's one corner that isn't specified here, which is what happens if you try to both reference and spend an input in the same transaction. That's more of a corner case, however, and I don't think it matters terribly much, so we can just pick one in the spec.
