Can't specify 'latest' string as my block in calls #5868

Closed
derrickpelletier opened this issue Dec 1, 2018 · 9 comments
Labels
area-blocktracker, area-provider (Relating to the provider module)

Comments

@derrickpelletier

Tried to search for this but having no luck. Couldn't really find any leads in the codebase either (not super familiar with it though).

When calling a contract method, I'm attempting to specify 'latest' as my block number:

myContract.methods.foobar().call.request(opts, 'latest', cb)

But MetaMask seems to always replace this with an actual block number, when I'd prefer it just send 'latest'. If I specify my own explicit block number, MetaMask does not replace it.
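For illustration, the difference in the JSON-RPC payload looks roughly like this (a sketch, not captured traffic; the address, data, and hex block number are placeholders):

```js
// What the dapp asks for (placeholder address/data):
const requestedByDapp = {
  jsonrpc: '2.0',
  id: 1,
  method: 'eth_call',
  params: [{ to: '0x0000000000000000000000000000000000000000', data: '0x' }, 'latest']
}

// What MetaMask appears to forward upstream: the block parameter is rewritten
// to its latest known block number (the hex value here is made up).
const forwardedByMetaMask = {
  ...requestedByDapp,
  params: [requestedByDapp.params[0], '0x6a4c2d']
}
```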

Trying to find a way around this but not having any luck. What can I do here? And what's the reason for this behaviour?

Expected behavior
I expect MetaMask to use the blockNumber string I specify.

Browser details (please complete the following information):

  • OS X
  • Chrome
  • MetaMask Version 5.0.3
@frankiebee self-assigned this Dec 3, 2018
@frankiebee added the area-provider label Dec 3, 2018
@danfinlay
Contributor

The reason for this behavior is that MetaMask is often pointed at load-balanced providers, where latest can vary between nodes as they synchronize. To keep internal consistency, MetaMask only asks the backend for the next block by number.

You're saying MetaMask doesn't allow you to request latest, or that it always translates this to a call to a specific block number? Can you go into detail about why a translation to the latest known block number doesn't meet your needs?

@derrickpelletier
Author

Yeah! So there are a couple scenarios where this is an issue for us, but the main problem is:

  1. Initiate a contract method call.
  2. MetaMask puts the latest known block in the call.
  3. Payload is sent to Infura.
  4. Sometimes (often enough that it's a problem) Infura will respond with a bad result: '0x'.

I confirmed with the Infura devs that this happens when the load balancer routes a request for a given block number to a node that does not have that block yet.

I've recreated this scenario in this gist
https://gist.github.com/derrickpelletier/877dc6196ac6e00028c0b576ef18daf7

If I could specify 'latest', in my tests I get around these failures. Sometimes we run 1-3 contract calls before even sending the transaction, and if any one of those calls fails we don't send the tx, so many of the failures we do see are unnecessary.
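Roughly what the reproduction boils down to (a sketch only, not the gist itself; assumes web3 1.x against an Infura endpoint, with a placeholder project id and placeholder call object):

```js
const Web3 = require('web3')
// placeholder endpoint; the gist uses a real Infura project
const web3 = new Web3('https://mainnet.infura.io/v3/<project-id>')

async function callAtPinnedBlock (callObject) {
  // pin the call to the block number we just learned about...
  const latest = await web3.eth.getBlockNumber()
  const result = await web3.eth.call(callObject, latest)
  // ...and occasionally the load balancer routes us to a node that hasn't
  // imported that block yet, so the call comes back as the empty value '0x'
  if (result === '0x') {
    console.warn(`empty result when pinned to block ${latest}`)
  }
  return result
}

// passing 'latest' instead lets each node answer from its own head:
// await web3.eth.call(callObject, 'latest')
```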

@derrickpelletier
Author

Also, I totally get what you're saying about internal consistency, but for the calls I make explicitly, shouldn't I have control over that aspect? MetaMask could keep using the latest known block for its own internal calls for the sake of consistency.

@frankiebee
Contributor

@derrickpelletier thanks for the investigation with Infura, you've answered a weird bug I've been hunting for the past two days! I'll see what we can do to tighten up the 'latest' request, because ideally your request and our request are in sync for transaction processing. If the dapp knows something that MetaMask doesn't, or vice versa, it can cause some nasty bugs during transaction signing.

@derrickpelletier
Author

@frankiebee whew, glad it is helpful—I should have shared the gist sooner.

If you're tightening up the 'latest' call, I don't know if you can fully escape the problem, though: the latest block number is still fetched from Infura, so you're still prone to hitting a node that is further along than other nodes behind the load balancer.

If I'm not mistaken, you can't fully get rid of this issue unless Infura ensures their load balancer only routes across nodes that are in sync.

Also, your point about tx signing is definitely valid. Is it possible that 'latest' could be an allowed block parameter during calls, while on sends you overwrite it with MetaMask's latest known block? Something to give a bit more flexibility?
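To be concrete, here's a hypothetical sketch of what I mean (this is not MetaMask's actual provider code, just an illustration of the suggestion, with made-up function and parameter names):

```js
// Hypothetical only: pass 'latest' through for read-only calls, keep pinning
// everything else to the wallet's latest known block for consistency.
function resolveBlockParam (method, requestedBlock, latestKnownBlock) {
  if (method === 'eth_call' && requestedBlock === 'latest') {
    return 'latest' // let the node answer from its own head
  }
  return requestedBlock === 'latest' ? latestKnownBlock : requestedBlock
}
```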

@derrickpelletier
Author

@zmadmani makes a good point in #5588 (comment)

I've also tried retrying with an increasing delay, with very minimal success even at ~6 seconds for the last attempt (which is already far too long when considering the UX).
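For reference, the retry workaround looks roughly like this (a sketch; the exact delays and helper names are illustrative, and the '0x' check is specific to this failure mode):

```js
// Retry a call with increasing delays, giving up once the backoff budget is
// spent. Even a ~6s final attempt often isn't enough, and it's already far
// too long from a UX perspective.
async function callWithRetry (makeCall, delays = [0, 1000, 2000, 6000]) {
  for (const delay of delays) {
    if (delay) await new Promise(resolve => setTimeout(resolve, delay))
    const result = await makeCall()
    if (result !== '0x') return result
  }
  throw new Error('call still returned 0x after all retries')
}

// usage (hypothetical contract/method names):
// const value = await callWithRetry(() => myContract.methods.foobar().call())
```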

@zmadmani

zmadmani commented Dec 5, 2018

Same here @derrickpelletier, I've only been able to reliably get it to fix itself after a refresh (which usually causes a separate call to fail). Have you found a way around the caching to force a re-fetch?

About the caching problem itself... I guess I can understand why you might want to cache certain errors from the blockchain (resulting from blockchain operations), but this specific type of error is not from the blockchain but rather from the infrastructure. I think an error that comes from failures in the infrastructure should not be cached, since it's not actually a valid answer to the query. Any thoughts?
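Something along these lines is what I have in mind, purely as an illustration (this is not MetaMask's actual cache middleware, and the function name is made up):

```js
// Hypothetical guard: treat '0x' from an out-of-sync node as an infrastructure
// failure rather than a canonical answer, so it never gets written to the cache.
function shouldCacheCallResult (result) {
  return typeof result === 'string' && result !== '0x'
}
```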

@giltotherescue

Just wanted to chime in to say that I have also experienced this issue. It's quite a challenge as we basically have no recourse for the user when it occurs. The issue does not happen when using our own RPC node, so it does seem like an issue related to Infura.

@danfinlay
Contributor

Tracing this issue, we have discovered a bug in geth that is also the cause of several intermittent bugs we've been tracking ourselves:
ethereum/go-ethereum#18254
