Decoupling FIL+ term from storage marketplaces (FIP-0045) #313
-
Great proposal! How does this work for data/deal aggregation? Right now a couple of data brokers / many clients aggregate smaller data pieces into a 32GiB deal, so when we “assign datacap to the piece”, is it assigned to the aggregated piece as a whole, or is datacap tracked individually for each small piece?
-
I was composing a comment, but it got way too long, so I made it its own document, here: https://hackmd.io/@R02mDHrYQ3C4PFmNaxF5bw/r1ITc9u-c My main issue is elaborating on some of the potential problems you have already pointed out, in particular your last three "reasons for caution", and I add one more: this proposal increases the incentive for SPs to seal for the shortest possible sector lifetimes. I also disagree that "network-subsidised data" is a good thing we should do, or at least it is a separate question we could address intentionally; if it can be avoided as a by-product of indefinite FIL+ terms, that would be good. I propose and elaborate that the solution to all these issues is a separate FIL+ collateral that is not coupled to the sector, but corresponds to the FIL+ status of the data itself.
-
Noting for followers that we're picking this up again soon. The next step is to prototype the implementation, which should help us find and resolve any uncertainties ahead of a well-defined FIP.
-
We share the wish that more active data be introduced into the Filecoin network, which will expand the Filecoin ecosystem and help real applications land.
-
@anorth thanks for bringing this one back! I'm supportive of the decoupling of the subsidy step. I have some concerns about this bit:
In general, control over whether or not a piece of data is stored on Filecoin should sit on the client / data owner side, not the SP side. There are use cases today where the data is only useful for a limited time, e.g. a 1-year video backup, after which the data is of no use to the owner. Allowing an SP to continually extend its lifetime and serve it in a deal does not produce a useful/valuable outcome. This is flagged in the reasons for caution in the proposal, but I think it needs to be weighted higher and addressed from a philosophical standpoint of data ownership as well. Secondly, what's the solution if a CID is identified as not worthy of DataCap? I.e., someone initially gets away with abusing the system to get their data verified, but future checks/retrieval sampling/etc. result in it being flagged as not worthy of receiving DataCap. Is there a path other than a FIP to remove the verification status from a piece?
-
@anorth can I confirm this will be possible for existing verified pieces, as well as new ones?
-
There's a lot to like about this proposal, but I wonder how it speaks to the tensions we're feeling in the Fil+ network between acceptable deals and acceptable data. For instance, notaries spend a lot of time trying to identify whether deals are appropriately determined, so that storage providers aren't simply creating a deal internally in order to obtain datacap; we try to ascertain that a deal is legitimately made with an external partner. Then there's appropriate data, which is the idea that there's some data that is publicly beneficial for the Filecoin network to host. I believe both aspects need to be in place to a greater or lesser degree for the process to work -- maybe it matters less if an SP has gone out and arranged the storage of data that is genuinely a public good. Alternatively, given that uploading supposedly beneficial data doesn't really mean anything if nobody wants it, perhaps genuine deals with external parties point to it being sufficiently "useful" data (part of the impetus for the Fil Enterprise project).

So my concern here is that this proposal leans a lot on the data itself being "useful", permanently, which is something it's pretty hard to determine in advance. This may need a narrowing of the scope of what Fil+ notaries generally verify, which is fine and possibly a direction we should be going in -- but it also means that verified data becomes a much higher hurdle. The demands and current economics of the ecosystem point in the other direction, though: verified deals are now a very high percentage of our overall deals, mostly because the increase in power makes them extremely attractive -- and, while I haven't checked, for some SPs at their current pricing it may be the difference between a profitable deal and an unprofitable one. If verified deals are a large percentage of total deals, now or in the future, then obtaining datacap -- very much especially permanent datacap! -- really does determine what level of return you get for your storage, and as more datacapped content fills the network, not only does the value of any particular datacap shrink (because it is an increasingly small percentage of the total datacap), but the value of non-datacap data shrinks even lower.

I guess what I'm saying here is that I would love to see some economic modelling of the consequences of this, under conditions where the allocation of datacap becomes harder but the length of time datacap is held by an SP grows. I ask this for two reasons: what does that mean for the decentralization of the network, and how strict does it suggest we need to be in guarding datacap provision?
-
I have begun drafting a formal FIP for this proposal in #432. I'm starting with the specification, as much of the motivation etc. will be copied from this discussion. @ZenGround0 and I are sketching the implementation now, which should flush out any considerations I have missed.
-
@ZenGround0 could you please help me understand how this may work? So far it sounds like:
On the storage provider implementation side, when they publish a deal to make the claim, how can they get the right range for CID_B? Or is that something that needs to be provided by the client?
-
What's the mechanism to ensure that all ranges are committed?
-
To echo @dannyob, I'm pretty concerned with the direction this conversation is going re: explicitly verifying data on the network. I'd like to tease apart the functionality we'd like to see and the practical implementation, but I'd call out that the moment we start talking about the network verifying explicit bytes (and proposals for how to automate renewal for those bytes) we are opening up a philosophical and logistical can of worms.

I feel pretty strongly that the network should not be in the position of making any sort of claim about the legitimacy of data (nor that it could be interpreted as making those claims). The flow we have today is the network endowing individuals with datacap, who then make their own decisions about data. While subtly different, it feels substantially less problematic than saying explicitly that a specific dataset should be subsidized vs. another that might not be. You can imagine, for a sensitive dataset, the act of including a subsidy (or not including one) being seen as a political act. Practically speaking, this proposal could be interpreted as the network making some sort of endorsement of the data itself, which, given the differences across global jurisdictions and laws, could be seen as problematic.

IMHO we should instead be focused on how we can automate the notary process (via incentives, where it is irrational to engage in self-dealing) and migrate to more permissionless systems post-FVM. I can elaborate on some ideas here (I believe Nicola has some as well) - but don't want to derail from this topic.
-
@anorth, I was just chatting with @jennijuju and a question came up - what is the plan for handling changes introduced by FIP-0028 here? I assume the mechanism has to change somehow, but IIRC we need to account for that in this FIP before this can move forward. Here's the PR for that FIP: https://github.com/filecoin-project/FIPs/pull/226/files
-
This is a new thread for discussion of the capability for larger-than-sector FIL+ allocations, and the ranged-claim approach to satisfying them. A number of the questions above are about that, so let's discuss it here.

This proposal presents an opportunity to implement the data structures that will permit larger-than-sector allocations. The inability to do deals larger than a sector is apparently a pain at the moment. This proposal addresses the FIL+ part of that, but won't actually make such deals possible yet without further changes to the miner actor for #298.

There are significant constraints on what it's possible to implement here. This could provide a huge scalability boost to FIL+, but can't come with much in the way of enforced policy mechanisms. The general design pattern here is to make FIL+ simple, and delegate all deal-related policy to market actors, which can act as delegates. If a client wants particular assurances, they should use a market that provides them. An example would be some collateral posted that would be lost if not all ranges are sealed. Note that the built-in market cannot support larger-than-sector deals and I have no intention of doing the work to change that. Instead, I expect other, far superior market actors to emerge on the FVM.

If there is significant opposition to making this possible, I'll just remove it from the proposal, and then larger-than-sector allocations will have to wait some time (years?) to rise in priority enough to motivate a dedicated FIP.
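To make the ranged-claim idea a bit more concrete, here is a rough, hypothetical sketch of what a claim covering only part of a larger-than-sector allocation might record. The type and field names are mine for illustration, not the draft FIP's schema, and any "all ranges sealed" policy would live in a delegate market actor rather than the verified registry:

```rust
// Hypothetical sketch only: field names are illustrative, not the FIP's schema.
// One large allocation (e.g. 128 GiB of verified data) could be satisfied by
// several claims, each covering a contiguous byte range of the allocated piece
// and landing in a different sector.
type AllocationID = u64;
type ActorID = u64;
type ChainEpoch = i64;

struct RangedClaim {
    allocation: AllocationID, // the larger-than-sector allocation being claimed
    provider: ActorID,        // storage provider making the claim
    sector: u64,              // sector holding this range
    offset: u64,              // start of the range within the allocated piece (bytes)
    size: u64,                // length of the range (bytes)
    term_start: ChainEpoch,   // epoch at which this range was committed
}
```

A market acting as the client's delegate could, for example, hold collateral that is released only once claims covering the whole allocation exist.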
-
@anorth from the Fil+ side of things - we're going to need time to update all the tooling on our end if this FIP goes through. Specifically:
I'm sure there is more than this. If this FIP passes, would love to find a path where we can play around with the implementation in test/calib with enough lead time to ensure a smooth transition with the network upgrade. cc @raghavrmadya @galen-mcandrew
-
Are there any examples of scenarios where differently architected market actors could be useful?
-
Great proposal. I have a few questions regarding technical details
-
@anorth and all others involved in designing this FIP proposal, thanks for all the discussion over the last few weeks. It helped me clarify my understanding of this one substantially and I'm generally supportive of it. However, as part of this design change, and based on what we have observed in the Fil+ community so far and where gaps lie on the governance front today, I'd like to propose the following amendment: create a mechanism to change an allocation for a piece through a Fil+ governance mechanism. Primary motivations:
Proposed design: we can design and implement the governance process and tooling to make this easier on the Fil+ side in the few cases where it will be needed. Additional considerations:
-
Some notes from the CN SPWG call today on the manual opt-in migration (existing DC -> new QAP accounting):
SPs raised that they'd prefer flexibility, meaning they might prefer the option to define the term max themselves upon the opt-in migration. They will circle back after more analysis.
-
An opt-in per-existing-DC-sector migration by the SP is being proposed in the existing FIP draft. tl;dr: the SP can extend any active datacap-ed deal up to current + 540 days, regardless of the deal duration set by the client, by triggering a sector migration. We should try to reach consensus soon, as a community, on whether this is desirable:
-
The current proposal (which only enables the built-in storage market actor as a delegate for datacap allocations) sets the default term min = deal duration, and term max = deal duration plus a buffer. We should seek consensus on the following product decision:
-
FYSA @dkkapur & Fil+ team, the current proposal states: "A verified client can extend the term for a claim beyond the initial maximum term by spending new DataCap." This means the client will need to request more datacap to store the data for a longer term. This is similar to how clients need to request new datacap to renew a deal by making a new deal, or to make extra replicas. Just pointing out that you may see DataCap extension as a reason for datacap applications with this FIP.
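To illustrate the mechanism (a sketch only, with assumed names; the exact message shape is defined in the FIP draft, not here): extending a claim past its original maximum term would be requested by the client alongside a transfer of new DataCap to the verified registry, something like:

```rust
type ActorID = u64;
type ClaimID = u64;
type ChainEpoch = i64;

// Sketch of an extension request a client might send with a DataCap transfer.
// Field names are assumptions for illustration, not the deployed API.
struct ClaimExtensionRequest {
    provider: ActorID,    // SP currently holding the claim
    claim: ClaimID,       // the existing claim to extend
    term_max: ChainEpoch, // new maximum term, measured from the claim's term start
}
```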
-
Hi @anorth @dkkapur @mr-spaghetti-code, I have to admit I'm struggling to see (and possibly this is why there is low community engagement):
This complex FIP would benefit from a service design map or a mapping of the flow; if there is one, can you share it?
-
Per @dkkapur, this feature is not required/desired by Fil+ at the moment. However, if this is supported at the protocol level, there should not be major issues, given that currently a verified client can simply spend datacap on any CID they'd like, without having the actual data, via offline deals.
-
Similar to deal packing, a storage provider's decision on when to force precommit (batch) / provecommit (aggregate) now has a new constraint: allocation expiration. Using an extreme example, today, if
I personally don't think this is a huge deal given:
Though it is still a new constraint, so I want to leave a call-out.
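As a rough sketch of the new constraint (my own helper functions, assuming i64 epochs; not SP pipeline code): a batch or aggregate can only wait as long as its earliest-expiring allocation allows.

```rust
// Sketch: with allocations, a batch of not-yet-proven sectors is bounded by the
// earliest allocation expiration among the FIL+ pieces it contains.
fn earliest_allocation_expiration(expirations: &[i64]) -> Option<i64> {
    expirations.iter().copied().min()
}

// Can the SP keep waiting to fill the batch, given the expected time until the
// prove-commit actually lands on chain?
fn can_wait_longer(now: i64, expected_commit_delay: i64, expirations: &[i64]) -> bool {
    match earliest_allocation_expiration(expirations) {
        Some(deadline) => now + expected_commit_delay < deadline,
        None => true, // no FIL+ allocations in the batch: no new constraint
    }
}
```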
-
Moving a question from @misilva73 from PR review
-
PiKNiK supports FIP-0045
-
Moving some questions from @dkkapur to the right forum:
The case you have identified is the only on-chain use of the client address in the proposed implementation. Of course it's great metadata to keep transparent. It might also be useful for other things later on.
A claim remains valid until its max term is reached, regardless of what happens to sectors. However, in the concrete version being implemented, if the sector that first claimed the allocation expires, there is no mechanism for the SP to seal that data into a new sector and "re-claim" the power. The claim remains in state, but isn't doing anything. I hope to add such a mechanism in the follow-up #298.
I'm not sure what you mean by "optimize", but your description of behaviour is mostly correct. A provider could claim all the allocations simultaneously (but with a unique copy of data for each).
Yes
Terms less than 12 months will be impossible to seal, so it would probably make sense to do that (but isn't critical).
We discussed this parameter live in the core devs call. Yes a range of numbers could work, trading off the buffer that makes deal packing easier with a tighter adherence to the deal's term. I would like longer, you pressed for shorter, we landed at 90d.
Yes, although I tend to take the perspective that clients making short deals will find a lower supply of sectors willing to take them. If FIP-0036 doesn't pass, I will immediately propose a FIP to increase max sector duration to 5y, which would make this issue go away.
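For readers following along, a small sketch of the default term bounds discussed above (term_min = deal duration; term_max = deal duration plus the 90-day buffer mentioned in the answer about deal packing). Epoch arithmetic assumes Filecoin's 30-second epochs; the helper is illustrative, not actor code:

```rust
const EPOCHS_PER_DAY: i64 = 2880; // 30-second epochs
const TERM_BUFFER: i64 = 90 * EPOCHS_PER_DAY;

// Default allocation term bounds derived from a deal's schedule (illustrative).
fn default_terms(deal_start: i64, deal_end: i64) -> (i64, i64) {
    let duration = deal_end - deal_start;
    let term_min = duration;               // store at least as long as the deal
    let term_max = duration + TERM_BUFFER; // slack to make sector packing practical
    (term_min, term_max)
}
```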
-
FYI there's a retroactive update proposal for this FIP @ #961 to align type details to what has actually been deployed to the network.
-
FIP-0045 was referenced as solving this legacy issue here, but it seems that the 10x pledge is still locked when a sector becomes CC from DC. Is this by design or an oversight? @anorth CC @rvagg
-
Update: FIP draft in progress on #432
With some changes we’ll be making to support more open programmability on the FVM, we have an opportunity to extend the term associated with FIL+ data cap allocations, possibly indefinitely. This would mean that a provider provably storing a piece of useful data could enjoy the boosted power and rewards for much longer, independently of limitations of deals with a storage market actor.
This post is a starter for discussion around whether long, extensible terms for verified data are positive for the network, and what restrictions or caveats we might need. I’ll follow up with concrete technical design soon.
Background
First, a brief overview of how the Filecoin Plus verified data program works today. The program’s operation is coupled with the built-in storage market actor.
So, the duration for which a FIL+ verified deal provides boosted rewards to a storage provider is limited by the deal term, in turn limited by the maximum sector lifespan for which we believe SDR PoRep is secure (i.e. a reason totally unrelated to FIL+ incentives). The Filecoin Plus data cap mechanism has no intrinsic notion of the term for which data cap, and hence the network-subsidised rewards, should be granted.
Protocol direction toward decoupling
The storage market actor’s prohibition of deal terms exceeding the sector’s commitment is a limitation of its simplified implementation. So is the inability to extend a deal. We would like to resolve both of these, for example by allowing the deal’s data to be moved or added to a new sector with a farther-out expiration. We could do so if not for some complications with Filecoin Plus quality-adjusted power. But there’s no technical constraint on arbitrarily long deal terms.
Separately, but pushing in the same direction, we are planning architectural changes to open up the platform for development of alternative market implementations on the FVM. Other markets might structure deals quite differently, and the built-in one should not have a monopoly on brokering FIL+ verified data. Thus, the verified registry actor will be decoupled from the built-in storage market so that verified data is accessible to all markets.
Each of these shifts—either fixing the built-in market or allowing new markets—would result in a FIL+ data cap allocation effectively having an indefinite term. Data cap has no intrinsic term, and the constraints on built-in storage market deal terms would be either resolved or irrelevant.
Network-subsidised data
I think that this decoupling, and subsequent realisation of arbitrary terms for verified data, is a positive change. It simplifies and clarifies many things. We can lean in to the potentially-indefinite term for verified data and make it a feature of the network.
By decoupling verified data cap from the built-in market, and tracking allocations of data cap to pieces of data explicitly in the verified registry actor, we can view a FIL+ allocation as an explicit data subsidy. The network will fund the ongoing, provable storage of verified pieces through increased reward, independently of any client wishing to pay an additional incentive. Data that a verified client has blessed remains useful regardless of the terms or technical limitations of particular markets or of sector proof-of-replication algorithms, or even of the term for which a client wishes to pay extra for its storage. The Filecoin Plus program incentivises the storage of useful data on Filecoin and, if a verified client so indicates, that data might be useful for far longer than a single sector’s lifetime, possibly indefinitely.
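As a rough illustration of what tracking allocations explicitly in the verified registry could look like, the sketch below follows the shape described in the work-in-progress FIP, but the field names and types here are my simplification, not the authoritative spec (it assumes the `cid` crate for piece CIDs):

```rust
use cid::Cid;

type ActorID = u64;
type ChainEpoch = i64;

// An allocation is created when a verified client spends DataCap on a piece,
// before any sector exists. It expires if no provider claims it in time.
struct Allocation {
    client: ActorID,        // verified client that spent the DataCap
    provider: ActorID,      // intended storage provider
    data: Cid,              // piece CID the DataCap is allocated to
    size: u64,              // padded piece size in bytes
    term_min: ChainEpoch,   // minimum commitment the provider must make
    term_max: ChainEpoch,   // longest the resulting claim may last
    expiration: ChainEpoch, // epoch by which the allocation must be claimed
}

// A claim replaces the allocation once the provider proves the data in a
// sector; it is what earns the quality-adjusted power boost, independent of
// any deal with a market actor.
struct Claim {
    provider: ActorID,
    client: ActorID,
    data: Cid,
    size: u64,
    term_min: ChainEpoch,
    term_max: ChainEpoch,
    term_start: ChainEpoch, // epoch at which the data was committed
    sector: u64,            // sector number holding the data
}
```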
Simplicity
Minimising coupling is a key principle in software and systems engineering. Decoupling the verified data rewards from storage deals provides great opportunities for simplifying Filecoin’s core mechanics in order to provide a better platform for development.
One benefit we could immediately realise is simplifying the calculation of quality-adjusted power. The current mechanism, which is coupled to deal terms, imposes significant limits on the flexibility of sector storage, essentially rendering it write-once. Decoupling these neatly resolves the calculation difficulties to give full flexibility, and is much simpler than the Filecoin Plus premium proposal that also attempted this.
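For reference, today's coupling shows up directly in the sector quality formula, which weights deal space-time over the sector's lifetime (my paraphrase of the current spec, simplified by dividing out the base multiplier):

$$
q = \frac{(s \cdot d - w_{\text{deal}} - w_{\text{verified}}) + w_{\text{deal}} + 10\, w_{\text{verified}}}{s \cdot d}
$$

where $s$ is the sector size, $d$ its duration, and each $w$ is deal bytes multiplied by deal epochs. Because $w_{\text{verified}}$ depends on deal terms, changing deals or data placement changes $q$ mid-life. A decoupled version could (roughly, and subject to the final FIP design) compute quality from the claimed verified space alone, e.g. $q = (s + 9\, s_{\text{verified}})/s$, fixed for the sector regardless of deal terms.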
Proposed functionality
I’ll follow with a detailed design proposal in a FIP (work-in-progress here), but in the meantime here is a sketch of the properties I think we’d reach:
Stages to realisation
We might realise fully unlimited terms in a few stages, the latter of which need not land this year.
Reasons for caution
There are some reasons to be cautious about this change, and I’d like to invite a full discussion. I can start with a few possible concerns here.
Alternatives
If we don’t implement something like these bounds on explicit verified data allocation term, we would need to:
Or possibly there is another alternative model that would allow the flexibility we seek.