
ECIP-1049: Change the ETC Proof of Work Algorithm to Keccak-256 #13

Closed
p3c-bot opened this issue Jan 16, 2019 · 130 comments
Labels
status:5 last-call ECIP has been accepted and is waiting for last-call reviews. type: std-core ECIPs of the type "Core" - changing the Classic protocol.

Comments

@p3c-bot
Contributor

p3c-bot commented Jan 16, 2019

Recent thread moved here (2020+)


lang: en
ecip: 1049
title: Change the ETC Proof of Work Algorithm to Keccak-256
author: Alexander Tsankov (alexander.tsankov@colorado.edu)
status: LAST CALL
type: Standards Track
category: core
discussions-to: #13
created: 2019-01-08
license: Apache-2.0

Change the ETC Proof of Work Algorithm to Keccak-256

Abstract

A proposal to replace the current Ethereum Classic proof of work algorithm with EVM-standard Keccak-256 (pronounced "ketch-ak").

The reference hash of string "ETC" in EVM Keccak-256 is:

49b019f3320b92b2244c14d064de7e7b09dbc4c649e8650e7aa17e5ce7253294
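For readers who want to check a device or library against this value, here is a minimal, unoptimized pure-Python sketch of Keccak-256 with the original Keccak padding used by the EVM (note this is not NIST SHA3-256, which appends a different domain byte). It is an illustration only, not consensus-grade code:

```python
# Minimal pure-Python Keccak-256 (original Keccak padding, as used by the
# EVM) -- NOT NIST SHA3-256, which appends a different domain byte.
# Unoptimized illustration only; do not use for consensus code.

_RC = [
    0x0000000000000001, 0x0000000000008082, 0x800000000000808A,
    0x8000000080008000, 0x000000000000808B, 0x0000000080000001,
    0x8000000080008081, 0x8000000000008009, 0x000000000000008A,
    0x0000000000000088, 0x0000000080008009, 0x000000008000000A,
    0x000000008000808B, 0x800000000000008B, 0x8000000000008089,
    0x8000000000008003, 0x8000000000008002, 0x8000000000000080,
    0x000000000000800A, 0x800000008000000A, 0x8000000080008081,
    0x8000000000008080, 0x0000000080000001, 0x8000000080008008,
]
_ROT = [[0, 36, 3, 41, 18], [1, 44, 10, 45, 2], [62, 6, 43, 15, 61],
        [28, 55, 25, 21, 56], [27, 20, 39, 8, 14]]  # rotation offsets [x][y]
_M64 = (1 << 64) - 1


def _rol(v, n):
    return ((v << n) | (v >> (64 - n))) & _M64


def _keccak_f(a):
    """Keccak-f[1600] permutation on the 5x5 matrix of 64-bit lanes a[x][y]."""
    for rc in _RC:
        # theta
        c = [a[x][0] ^ a[x][1] ^ a[x][2] ^ a[x][3] ^ a[x][4] for x in range(5)]
        d = [c[(x - 1) % 5] ^ _rol(c[(x + 1) % 5], 1) for x in range(5)]
        a = [[a[x][y] ^ d[x] for y in range(5)] for x in range(5)]
        # rho + pi
        b = [[0] * 5 for _ in range(5)]
        for x in range(5):
            for y in range(5):
                b[y][(2 * x + 3 * y) % 5] = _rol(a[x][y], _ROT[x][y])
        # chi
        a = [[b[x][y] ^ ((~b[(x + 1) % 5][y]) & b[(x + 2) % 5][y])
              for y in range(5)] for x in range(5)]
        # iota
        a[0][0] ^= rc
    return a


def keccak256(data: bytes) -> bytes:
    rate = 136                      # 1088-bit rate for a 256-bit digest
    buf = bytearray(data)
    buf.append(0x01)                # Keccak pad10*1 (SHA3-256 would use 0x06)
    buf.extend(b"\x00" * (-len(buf) % rate))
    buf[-1] |= 0x80
    a = [[0] * 5 for _ in range(5)]
    for off in range(0, len(buf), rate):            # absorb
        for i in range(rate // 8):                  # 17 lanes per block
            lane = int.from_bytes(buf[off + 8 * i:off + 8 * i + 8], "little")
            a[i % 5][i // 5] ^= lane
        a = _keccak_f(a)
    # squeeze: 256 bits fit in the first four lanes of the state
    return b"".join(a[i % 5][i // 5].to_bytes(8, "little") for i in range(4))


# The control hash from this ECIP:
print(keccak256(b"ETC").hex())
```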

Implementation Plan

  • Activation Block: 12,000,000 (approx. 4 months from acceptance - January 2021)

  • Fallback Activation Block: 12,500,000 (approx. 7 months from acceptance - April 2021)

  • If not activated by Block 12,500,000 this ECIP is voided and moved to Rejected.

  • We recommend that difficulty be multiplied by 100 at the first Keccak-256 block, relative to the final Ethash block. This compensates for the higher performance of Keccak and prevents a pileup of orphaned blocks at the switchover. This is not required for launch.
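The recommended one-off adjustment can be expressed as a simple multiplier (a sketch; the function and constant names below are hypothetical, not part of any client):

```python
# One-off difficulty bump at the switchover block, compensating for
# Keccak-256 hashing being much faster than Ethash per unit of hardware.
KECCAK_SWITCHOVER_MULTIPLIER = 100


def first_keccak_difficulty(final_ethash_difficulty: int) -> int:
    """Difficulty of the first Keccak-256 block, derived from the last Ethash block."""
    return final_ethash_difficulty * KECCAK_SWITCHOVER_MULTIPLIER
```

After the switchover block, the normal per-block difficulty adjustment would take over as usual.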

Motivation

  • A response to the recent double-spend attacks against Ethereum Classic. Most of this hashpower was rented or came from other chains, specifically Ethereum (ETH). A separate proof of work algorithm would encourage the development of a specialized Ethereum Classic mining community, and blunt attackers' ability to purchase mercenary hash power on the open market.

  • As a secondary benefit, deployed smart contracts and dapps running on chain are currently able to use keccak256() in their code. This ECIP could open the possibility of smart contracts being able to evaluate chain state, and simplify second layer (L2) development. We recommend an opcode / precompile be exposed in Solidity to facilitate this.

  • Ease of use on consumer processors. Keccak-256 is far more efficient per hash than Ethash. It requires very little memory and power, which aids deployment on IoT devices.

Rationale

Reason 1: Similarity to Bitcoin

The Bitcoin network currently uses the CPU-intensive SHA256 algorithm to evaluate blocks. When Ethereum was deployed it used a different algorithm, Dagger-Hashimoto, which eventually became Ethash at the 1.0 launch. Dagger-Hashimoto was explicitly designed to be memory-intensive, with the goal of ASIC resistance [1]. It has been provably unsuccessful at this goal, with Ethash ASICs currently easily available on the market.

Keccak-256 is the product of decades of research and the winner of a multi-year contest held by NIST that has rigorously verified its robustness and quality as a hashing algorithm. It is one of the only hashing algorithms besides SHA2-256 that is allowed for military and scientific-grade applications, and can provide sufficient hashing entropy for a proof of work system. This algorithm would position Ethereum Classic at an advantage in mission-critical blockchain applications that are required to use provably high-strength algorithms. [2]

A CPU-intensive algorithm like Keccak-256 would offer both the uniqueness of a fresh PoW algorithm that has not had ASICs developed against it and the organic optimization of a dedicated, financially committed miner base, much as Bitcoin saw with its own SHA2 algorithm.

If Ethereum Classic is to succeed as a project, we need to take what we have learned from Bitcoin and move towards CPU-hard PoW algorithms.

At first, most users would run network nodes, but as the network grows beyond a certain point, it would be left more and more to specialists with server farms of specialized hardware. - Satoshi Nakamoto (2008-11-03) [3]

Note: please consider that this is from 2008, and the Bitcoin community at that time did not differentiate between node operators and miners. I interpret "network nodes" in this quote to refer to miners, and "server farms of specialized hardware" to refer to mining farms.

Reason 2: Value to Smart Contract Developers

In Solidity, developers have access to the keccak256() function, which allows a smart contract to efficiently calculate the hash of a given input. This has been used in a number of interesting projects launched on both Ethereum and Ethereum Classic. Most notably a project called 0xBitcoin [4], on which the ERC-918 spec was based.

0xBitcoin is a security-audited [5] dapp that allows users to submit a proof of work hash directly to a smart contract running on the Ethereum blockchain. If the sent hash matches the given requirements, a token reward is trustlessly dispensed to the sender, along with the contract reevaluating difficulty parameters. This project has run successfully for over 10 months, and has minted over 3 million tokens [6].

Given the direction that Ethereum Classic is taking, with its focus on Layer-2 solutions and cross-chain compatibility, being able to evaluate proof of work on chain will be tremendously valuable to developers of both smart contracts and node software. This could greatly simplify interoperability.

Implementation

Example of a smart contract trustlessly computing the Keccak-256 hash of a hypothetical block header.

Here is an analysis of Monero's nonce distribution for CryptoNight, an algorithm similar to Ethash that also attempts to be "ASIC-resistant". It is very clear in the picture that, before the hashing algorithm was changed, there was a distinct nonce pattern. This is indicative of a major failure in a hashing algorithm, and should illustrate the dangers of disregarding proper cryptographic security. Finding such a pattern would be far harder in a proven system like Keccak-256.


Based on analysis of the EVM architecture here, there are two main pieces that need to be changed:

  1. The proof of work function needs to be replaced with Keccak-256.
  2. The function that checks the nonce header in the block needs to accept Keccak-256 hashes as valid for a block.


A testnet implementing this ECIP is currently live, with more information available at Astor.host

  • Official Keccak Team Implementation Document for Hardware and Software. Located here
  • Node Implementation (based on Parity). Located here
  • Keccak-256 CPU Miner. Located here
  • Block Explorer. Located here
  • Live Network Stats. Located here

References:

  1. https://github.com/ethereum/wiki/wiki/Dagger-Hashimoto#introduction
  2. https://en.wikipedia.org/wiki/SHA-3
  3. https://satoshi.nakamotoinstitute.org/emails/cryptography/2/
  4. https://github.com/0xbitcoin/white-paper
  5. 0xBTC Smart Contract  EthereumCommonwealth/Auditing#102
  6. https://etherscan.io/address/0xb6ed7644c69416d67b522e20bc294a9a9b405b31

Previous discussion from Pull request


@realcodywburns realcodywburns added type: std-core ECIPs of the type "Core" - changing the Classic protocol. status:1 draft ECIP is in draft stage and can be assigned ECIP number and merged, but requires community consensus. labels Jan 17, 2019
@p3c-bot
Contributor Author

p3c-bot commented Jan 27, 2019

Work has officially begun on Astor testnet - a reference implementation of an Ethereum Classic Keccak256 testnet. Any help is appreciated.

Astor Place Station in New York is one of the first subway stations in the city, and we plan the testnet to be resilient while also delivering far greater performance by swapping out the overly complicated Ethash proof of work algorithm.

@realcodywburns
Member

"I think the intent of this ECIP is to just respond with an ECIP because the ECIP knowingly isn't trying to solve the problems of the claimed catalyst (51 attack). ETC can change it's underwear in some way but it has to have some type of super power than 'just cause'. I reject." - @stevanlohja #8 (comment)

@Harriklaw

First and most crucial question: do we need an algo change? How could an algo change help us? For me there are two aspects that should be examined at the same time. The first is how secure the new PoW is versus the old one. As you nicely wrote, a well-examined algo such as Keccak-256 is both scientifically reviewed and, as the successor of SHA2, has a high probability of succeeding as SHA2 did with Bitcoin. This can be controversial though, so this article can strengthen the pros of Keccak, as it is considered that it may be quantum resistant: https://eprint.iacr.org/2016/992.pdf

"Our estimates are by no means a lower bound, as they are based on a series of assumptions. First, we optimized our T-count by optimizing each component of the SHA oracle individually, which of course is not optimal. Dedicated optimization schemes may achieve better results. Second, we considered a surface code fault-tolerant implementation, as such a scheme looks the most promising at present. However it may be the case that other quantum error correcting schemes perform better. Finally, we considered an optimistic per-gate error rate of about 10^-5, which is the limit of current quantum hardware. This number will probably be improved in the future. Improving any of the issues listed above will certainly result in a better estimate and a lower number of operations, however the decrease in the number of bits of security will likely be limited."

The second aspect we should examine is how the algo change will influence decentralization, and this topic is more controversial. As economics are the most decisive factor in ASIC development (assuming that ETC will be valuable), an algo change will lead to new ASICs very soon. For me the real question is: how soon? And the answer is clearly hypothetical. Why is this a crucial question? First of all, if ASICs already exist, that would be unfair and centralizing for the interval in which new companies develop their own hardware. If that is not the case, companies that already produce ASICs for SHA2 and other CPU-intensive algos will produce SHA3 ASICs very fast, as they already have the know-how and have learnt to adapt in this hardware/algo chase game very well. But do we want that? Do we want big ASIC companies to have the upper hand in ETC mining hardware production? If we accept that decentralization is already well established in the crypto hardware industry (meaning ASIC companies) and that many companies have already joined the space, then decentralization for SHA3 will be achieved soon. But if we accept that the GPU industry is a better space for our community (for decentralization purposes), then we should consider that any change to a CPU-intensive algo will provoke massive change to our miners and mining ecosystem. Ethash, compared to Keccak, is memory intensive, and GPUs are pretty much competitive with ASICs right now: 1) efficiency: RX 580 = 3.33 W/MH and A10 = 1.75 W/MH; 2) price: RX 580 = $150 ($5/MH) and A10 = $5600 ($11/MH).
So the real question is pretty much equal to this: CPU intensive vs memory intensive? GPUs + ASICs, or ASICs only? Is BTC or ETC more decentralized? I think, as of now, GPUs + ASICs in the Ethash ecosystem make a healthy environment for decentralized hash power, although BTC seems to be well decentralized too.
Conclusion: for me, an algo change would be profitable long term, as Keccak-256 seems superior to Ethash in terms of security. Nevertheless, Ethash seems superior in terms of decentralization. Short term, we should consider other ways to reduce the risk of a future "51% attack" and allow the crypto mining industry to mature more. That would lead to a more decentralized mining hardware industry and consort with our vision of a better decentralized ecosystem.

@p3c-bot
Contributor Author

p3c-bot commented Apr 4, 2019

Thank you for your post @Harriklaw. The plan for this switch is to create a SHA3 testnet first, for miners and hardware manufacturers to use, become comfortable with, and collect data on. Once we start seeing Flyclients, increased block performance, and on-chain smart contracts that verify the chain's proof of work, the mining community will see the tremendous value of this new algorithm and support a change.

RE: decentralization. I consider Ethash to already be ASIC'd, and as ETC becomes more valuable it will be less possible to mine it with a GPU anyway. The concern is that right now, Ethash is so poorly documented that only 1 or 2 companies know how to build a profitable ASIC for it. However, with SHA3, it is conceivable that new startups and old players (like Intel, Cisco, etc.) would feel comfortable entering the mining hardware market, since they know the SHA3 standard is transparent, widely used, and has uses beyond just cryptocurrency.

SHA3 has been determined to be 4x faster in hardware than SHA2, so it is conceivable an entirely new economy can be created around SHA3 that is different than SHA2, similar to how the trucking market has different companies than the consumer car market.

@saturn-network

Re: Quantum resistance of hash functions

  1. By the time it is possible to build a quantum computer that can crack keccak256 (sha3) there will be another generation or two of hash functions (think sha4 and sha5).
  2. Elliptic curve cryptography in Ethereum's private/public keys (for the vast majority of cryptocurrencies, really, including ETH BTC ETC TRX...) will be cracked much sooner than that. Who cares about mining crypto when you can literally steal other people's money (i.e. steal Satoshi's bitcoin).

I do not think we should worry about quantum resistance in this ECIP.

@saturn-network

@p3c-bot frankly, we might even see sha3 ASICs embedded in desktop and mobile processors. In fact, SHA256 already has optimized instructions on ARM and Intel. Chances of Ethash instructions in ARM and Intel are slim to none at this point.

@zmitton
Contributor

zmitton commented Apr 14, 2019

In the process of creating an ETC FlyClient, I have run into major blockers that can be eliminated if 1049 (this ECIP) is adopted.

Basically, verification right now cannot be done without some serious computation. The main issue is Ethash requiring the generation of a 16 MB pseudorandom cache. This cache changes about once a week, so verifying the full work requires generating it many times. I have explored many creative solutions to this, but I believe we are stuck at light-client verification taking at least 10 minutes on a phone.

By contrast, with this ECIP plus FlyClient (ECIP-1055), I'm confident full PoW verification can be done in less than 5 seconds. This would open the door to new UX design patterns.

@p3c-bot p3c-bot changed the title ECIP-1049: Change the ETC Proof of Work Algorithm to Keccak256 ECIP-1049: Change the ETC Proof of Work Algorithm to SHA3 (Keccak256) Oct 14, 2019
@p3c-bot
Contributor Author

p3c-bot commented Oct 14, 2019

This standard uses the following Keccak256 control hash - if a device can produce this hash it will work for ECIP1049:

keccak256("ETC") = 49b019f3320b92b2244c14d064de7e7b09dbc4c649e8650e7aa17e5ce7253294


@AndreaLanfranchi

AndreaLanfranchi commented Oct 28, 2019

In the current Ethash system, the mixHash is a 256-bit string constructed based on the state of the blockchain. This is concatenated with the nonceHeader, 64-bit, and the entirety (320-bits) of it is hashed to verify proof of work.

Not completely accurate:

  1. Miners receive the header hash which is a hash of candidate block state (not the state of the chain)
  2. Header hash is combined with nonce to fill the initial state of keccak function
  3. Initial state goes through a first round of Keccak
  4. Generated (from point 3) mix is FNV'ed against 64 pseudo random accesses to DAG
  5. Output is then copied into state and processed through an additional round of Keccak
  6. Resulting dwords[0-3] are checked against target
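The mixing primitive referred to in step 4 can be sketched as follows. This is only an illustration of Ethash's FNV-style combine (the constant and formula follow the Ethash spec); the surrounding flow is shown as pseudocode comments, not a complete verifier:

```python
# Sketch of Ethash's FNV-style combine used when folding DAG words into the
# mix (step 4 above). Illustration only -- not a complete Ethash verifier.

FNV_PRIME = 0x01000193  # 32-bit FNV prime used by Ethash


def fnv(v1: int, v2: int) -> int:
    """Ethash's non-associative FNV-1-like combine on 32-bit words."""
    return ((v1 * FNV_PRIME) ^ v2) & 0xFFFFFFFF


# Rough shape of the verify flow described in steps 1-6 (pseudocode):
#   seed   = keccak512(header_hash ++ nonce)          # steps 2-3
#   mix    = replicate(seed)                          # 128-byte working mix
#   repeat 64 times:                                  # step 4
#       mix = fnv(mix, dag_lookup(fnv_index(mix)))    # pseudo-random DAG access
#   result = keccak256(seed ++ compress(mix))         # step 5
#   accept if result <= target                        # step 6
```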

For this proposal we recommend that miners be able to fill the mixHash field with whatever data they desire. This will allow a total of 320 bits for miners to use both to submit proof of work and to signal mining pools or vote on certain ECIP proposals.

Unless I am missing something, how is the proof of work supposed to be verified?
This would imply sending the work provider (the node or pool) the full initial mix (as composed by the miner) plus both the final target and the final state of Keccak: as a consequence, network traffic between work consumers (miners) and work providers (nodes/pools) is more than quadrupled, with non-trivial problems especially on the pool side.

@AndreaLanfranchi

AndreaLanfranchi commented Oct 28, 2019

@p3c-bot

The concern is that right now, Ethash is so poorly documented, only 1 or 2 companies knows how to build a profitable ASIC for it.

The "lack" of documentation for Ethash is pure fallacy.
The algorithm is well documented: it relies on the same SHA3, so if there is enough documentation on SHA3 then there is enough documentation on Ethash, whose "only" additions are the DAG (itself generated using SHA3) and the DAG accesses.
It's all described here: https://github.com/ethereum/wiki/wiki/Ethash

Anyone with basic programming skills can build a running implementation in the language they prefer.

ASIC makers never had problems "understanding" the algo (which also has a widely used open-source implementation here: https://github.com/ethereum-mining/ethminer) and there is no "secret" behind it. The problem for ASICs has always been how to overcome the memory-hardness barrier: but this has nothing to do with the algo itself, rather with how ASICs are built.

P.s. Before anyone argues that SHA3 != Keccak256, please recall that Keccak allows different paddings which do not interfere with the cryptographic strength of the function. SHA3 and Keccak256 (in Ethash) are the same Keccak with different paddings.
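The padding difference is easy to see in practice. Python's hashlib ships the NIST variant, and its digest of "ETC" does not match the EVM Keccak-256 control hash quoted earlier in this thread (a small sketch; the constant below is the control hash from this ECIP):

```python
import hashlib

# NIST SHA3-256 appends domain/padding byte 0x06; original Keccak-256 (as used
# by the EVM) appends 0x01. Same permutation, different padding, different digest.
nist_digest = hashlib.sha3_256(b"ETC").hexdigest()
evm_keccak_digest = "49b019f3320b92b2244c14d064de7e7b09dbc4c649e8650e7aa17e5ce7253294"

assert nist_digest != evm_keccak_digest  # identical input, different padding
```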

@zmitton
Contributor

zmitton commented Oct 28, 2019

Agree that Ethash being undocumented is not the best argument. It is, however, significantly more complex (being a layer atop Keccak256).

A bigger problem is that it doesn't achieve its intended goal of ASIC resistance or won't for much longer (as predicted here)

Also it is incredibly easy to attack since there is so much "dormant" hash power.

@AndreaLanfranchi

@zmitton
I think the DAG layer is really simple instead, but that is my opinion.

I think we may agree that "ASIC resistance" is not equal to "ASIC proof".
Given that the latter is utopia (provided there are enough incentives), I think Ethash is still the best "ASIC resistant" algo out there: efficiency increases are nowadays still in the range of less than two digits. Its resistance is inversely proportional to the cost of on-die memory for ASICs. That's it.
That's why an alternative (which I won't mention) has been proposed for Ethereum to further increase memory hardness.

"Dormant" hashpower is not an issue imho, and I don't think it is enough to mount an attack given that it is still predominantly GPU (yet not for long).

@zmitton
Contributor

zmitton commented Oct 29, 2019

(cross-posting as I see the discussion section has changed):

I have a low-level optimization for the ECIP. It would be preferable to use the following specific format (mentioned to Alex at the summit):

// unsealedBlockheader is the block header with a null nonce
digest = keccak256(concat(keccak256(unsealedBlockheader), 32ByteNonce))
// a "winning" digest is of course one that must start with lots of leading zeros
// the "sealed" header is then made by inserting the nonce and re-RLP-encoding
  • This optimizes the size of a PoW proof to 64 bytes instead of the current 400+ bytes (because PoW verification only requires the 2 items that were concatenated above).
  • It also ensures that dedicated hardware/software optimizes specifically for Keccak, because creating each new "guess" requires a minimal number of "non-Keccak" steps (swapping the 32-byte nonce in a 64-byte bytearray). If the nonce were instead just one of the RLP items in the header, then creating another guess would entail a new RLP-encoding step (of 400+ bytes) for each additional guess, and ASICs would have to design RLP into the hardware to compete. Also, selfish mining could become an advantageous strategy, since block headers can vary in size and larger headers would then take longer to mine on.
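The proposed format can be sketched in a few lines. This uses hashlib.sha3_256 as a stand-in for keccak256 (the padding differs, but the structure is identical), and the header bytes are hypothetical:

```python
import hashlib
import os


def h(data: bytes) -> bytes:
    # stand-in for keccak256 (NIST padding differs; the structure is the same)
    return hashlib.sha3_256(data).digest()


def pow_digest(unsealed_header: bytes, nonce32: bytes) -> bytes:
    """digest = H(H(unsealedBlockheader) ++ 32-byte nonce): a fixed 64-byte preimage."""
    assert len(nonce32) == 32
    return h(h(unsealed_header) + nonce32)


def meets_target(digest: bytes, target: int) -> bool:
    # "enough leading zeros" == the digest, read as a big-endian integer, is below target
    return int.from_bytes(digest, "big") < target


# Mining-loop sketch: the inner header hash is computed once; each new guess
# only swaps 32 bytes of a 64-byte buffer -- no re-RLP-encoding per guess.
unsealed_header = b"\x00" * 400            # hypothetical RLP header with null nonce
inner = h(unsealed_header)
for _ in range(4):
    nonce = os.urandom(32)
    digest = h(inner + nonce)              # one hash per additional guess
```

This is why the proof shrinks to 64 bytes: a verifier only needs the inner header hash and the nonce to recompute the digest.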

@AndreaLanfranchi

Also selfish mining could become an advantage strategy since block headers can vary in size and larger headers would then take longer to mine on.

Not sure what you mean here: block header is a hash with fixed width.

@zmitton
Contributor

zmitton commented Oct 29, 2019

larger input size, not output

@AndreaLanfranchi

Can't code, so I am forced to rely on the trusted third party devs and documentation as to the security of SHA3.

To be extremely clear SHA3 has been in ethash algorithm since its birth.
SHA3 in Ethash is called Keccak, but the two terms are synonyms. There is a slight difference between SHA3 and Keccak due to padding, but the two functions are otherwise the same and rely on the same cryptographic strength.

The Ethash algorithm is: Keccak256 -> 64 rounds of access to the DAG area -> Keccak256.

This proposal introduces nothing new unless (though it is not clearly stated) it is meant to remove the DAG accesses and eventually reduce the Keccak rounds from 2 to 1. I have to assume this, as the proponent says Ethash (Dagger-Hashimoto) is memory intensive while SHA3 would not be.

Under those circumstances, the newly proposed "SHA3 algo" (a wrong definition, as SHA3 is simply a hash function accepting arbitrary data as input; to define an algo you need to define how that input is obtained) would result in:

  • A new PoW algo which differentiates ETC from ETH
  • An algo easily and quickly implementable in ASICs, definitively tombstoning GPU mining on ETC

@BelfordZ
Member

BelfordZ commented Nov 7, 2019

I will never support a mining algorithm change, regardless of technical merits.

I also refuse to spend more time than writing this comment on the matter. I have read all of the above discussion, reviewed each stated benefit and weakness, and thought long and hard about as many of the ramifications as possible. While each benefit on its own can be nitpicked over, having its 'value added' objectively dissected, there are 1.5 reasons that trump all benefits. It's an unfortunate reality of the world and humanity.

The main point is that ruling out collusion being a driving force behind any contribution is impossible. This is especially true the closer the project gets to being connected with financial rewards. Every contribution has some level of Collusive Reward Potential. A change that adds a new opcode has a much higher CRP than fixing a documentation typo. Ranking changes with the highest CRP, my top 3 would be:

  1. Mining algorithm changes ('fair launch' being the oxymoron that we would be, for the 2nd time, subjected to)
  2. Consensus changes (blacklisting addresses, dependence on anything even remotely centralized for block validation)
  3. Protocol defined Peering rules (ie drop a peer if they support protests in HK type of rules)

So, going back to the 1.5 reasons that trump all...

1 - To explain by counterposition: let's assume I were a large supporter of a mining algo change. What's to say I've not been paid by ASIC maker xyz to champion this change, giving them the jump on all other hardware providers?

Spoiler: nothing.

0.5 - How can something which is designed to be inefficient be changed in the name of efficiency WITHOUT raising suspicion?

Spoiler: it can't.

To conclude, this might be a great proposal... for a new blockchain... And I urge you to reconsider this PR, as I believe there are more useful ways of spending development efforts.

@drd34d

drd34d commented Nov 7, 2019

I share a similar opinion as @BelfordZ on this subject.

Motivation
A response to the recent double-spend attacks against Ethereum Classic. Most of this hashpower was rented or came from other chains, specifically Ethereum (ETH). A separate proof of work algorithm would encourage the development of a specialized Ethereum Classic mining community, and blunt the ability for attackers to purchase mercenary hash power on the open-market.

"most of this hashpower was rented.." - what's the source of this assessment?

"would encourage the development of a specialized Ethereum Classic mining community" - a new and specialized mining community sounds like we could be talking about a newer and smaller community and probably less security?

The risk is too high and the threat isn't exactly there.
A double-spend attack, as you know, is not exactly a direct attack on the network but on the participants who do not take the necessary precautions (confirmation times). I have to admit, though, that the currently recommended confirmations for bigger transactions are nerve-racking.

@phyro
Member

phyro commented Nov 9, 2019

Here's my current view on this proposal. This won't solve 51% attacks, because they can't really be solved. I do agree that having a simpler, more minimalistic and more standard implementation of the hashing algorithm decreases the chances of a bug being found (and yes, Ethash working for the last 5 years tells us nothing; we've seen critical vulnerabilities discovered in OpenSSL after more than 10 years). On the other hand, we have no guarantee ETC will end up as the network with the most SHA3 hash rate. Even if we did in the beginning, it doesn't mean we can sustain first place. If we fail to do that, it's no different from Ethash from this perspective.
The second advantage that SHA3 has over Ethash is faster verification, which enables things like a FlyClient implementation (light clients for mobile that can verify the chain is connected without downloading all the block headers). I was talking to @Wolf-Linzhi about this, and maybe there are ways to make verification faster on Ethash, for example by modifying the epoch lengths; at the time we did not know, and I still don't.

The last thing I want to mention is that making an instant switch actually opens up an attack vector. Nobody knows what hash rate to expect at the block that switches to SHA3. This means that exchanges should not set confirmation times to 5,000 but closer to 50,000. This makes ETC unusable, and we should avoid such cases if possible. In case the community agrees to switch to SHA3, we should consider a gradual move where we set difficulties so that 10% of blocks are mined with a SHA3 proof and 90% with Ethash. Over time the percentage shifts in favor of SHA3, for example to 20% after half a year, and so on. This makes the hash rate much more predictable than an instant switch, and exchanges would not need to increase confirmation times to protect themselves against an unknown hash rate. I don't know whether this is easy to implement; I imagine we could have two difficulties (one per hashing algorithm), but I'm not that familiar with the actual implementation and the possible bad things this could bring.
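The gradual switchover sketched above could, for illustration, be driven by a deterministic ramp like the one below (all numbers, names, and the windowing scheme here are hypothetical, not part of the proposal):

```python
def sha3_share(blocks_since_fork: int) -> float:
    """Hypothetical ramp: start at 10% SHA3 blocks and add 10 percentage
    points every ~half year (about 1,000,000 blocks), capped at 100%."""
    steps = blocks_since_fork // 1_000_000
    return min(0.10 + 0.10 * steps, 1.0)


def block_algorithm(block_number: int, fork_block: int) -> str:
    """Deterministically interleave the two engines so that, over each
    10-block window, the SHA3 fraction matches the current ramp value."""
    share = sha3_share(block_number - fork_block)
    sha3_slots = round(share * 10)          # SHA3 slots per 10-block window
    return "sha3" if (block_number % 10) < sha3_slots else "ethash"
```

With two independent difficulties, each engine would adjust only against blocks of its own type, keeping both hash rates predictable throughout the transition.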

@gitr0n1n
Contributor

gitr0n1n commented Sep 4, 2020

Adding some more questions related to how the network will handle this chain split.

@bobsummerwill please chime in, as I know you're very active in private messages right now.

  • What do you want us to start calling your fork chain?

  • Will Grayscale be adding your new SHA3 forked chain to their trust alongside Ethash Ethereum Classic?

  • Will Gitcoin be integrating with your forked version of ETC?

  • What exchanges are in support of adding your forked version, so ETC users know on which markets they can sell their liquidity post chain split?

Interoperability Questions:
It's my understanding that SHA3 will not be interoperable with any EVM unless that EVM adds a special SHA3 opcode for your forked chain.

  • Have you asked which EVMs will add this code to become interoperable with your forked chain?

  • Has ETH committed to being interoperable with the new SHA3 forked chain?

@developerkevin
Member

@zmitton I'm sure you can join the working group. What is your Discord screen name?

@q9f
Contributor

q9f commented Sep 5, 2020

The EVM does not allow you to tweak the way the opcode works. You need a totally new one to get sha3.

I can address this.

@Spaceguide

The only thing you will accomplish is moving ETC toward being China-centered and FPGA- and ASIC-ruled, with enough hashrate available for attacks.

@TheEnthusiasticAs
Member

TheEnthusiasticAs commented Sep 6, 2020

  1. The objection argument "the centralization by one ASIC manufacturer":
    If there is demand for such devices, other companies will start to manufacture them too. That is how a market works: supply and demand. As it is easy to produce, based on the argument of the original champion of SHA3/Keccak for Ethereum Classic, the prices would also be relatively affordable.

  2. "The IoT goal of this project":
    The aim of this project is to be suitable for IoT. For that it should have the highest available security standard. The security standard of Ethash is not enough, based on the recent news provided by the original champion of SHA3/Keccak for Ethereum Classic. Otherwise this project would remain a hobby project, and the time and money people have invested in it would go down the drain.

  3. The objection "it does not protect against the relatively large reorganizations in the network":
    Nobody said that it will. What was said is that it will give ETC a chance to be a major blockchain with this algorithm. This project is popular. The higher the algo standard, the more attractive it will get, the more (serious) people will be interested in working on and with this project, the higher the value will go, the more miners will join, the more secure the network will be, and the fewer relatively large reorganizations the network will have.

@796F

796F commented Sep 7, 2020

  • GPUs

Already heavily centralized, to the point that one could rent some hash power and attack ETC. Changing from one GPU algorithm to another will not fix this problem; people rented hardware time, not algo time. They could just as easily (if not more easily) rent enough hardware to attack the new GPU algorithm.

  • FPGAs

Ironically, the FPGA supply chain is actually the least centralized of all hardware (CPUs, ASICs, GPUs). The vast majority of FPGA mining is done with recycled chips from 2-3 years ago; hundreds of vendors in China, Eastern Europe, and Southeast Asia have access, and there's a very healthy market of supply and demand that keeps one party from dominating. The main manufacturers of these chips do not supply the mining market for two key reasons.

First, FPGA performance improvement per tech node is not 10x or 20x as with ASICs; a 28nm FPGA goes toe-to-toe with a 16nm FPGA. Second, the COGS of higher tech-node chips is higher than the market sale price of recycled chips.

  • ASICs

Given that it will be a year before this change hits the network, there is plenty of time for one or two ASIC manufacturers to mass-produce an ASIC for ETC. Shuttles (small runs) of 28-40 nm chips typically take 5-6 months to complete, after which mass production is a given. The main risk, however, is the fork: an uncertainty in how much of the network follows ETH and how much follows SHA. ASIC vendors may back out or hesitate given this risk.

Would love to be involved with the technical discussions.

@q9f q9f pinned this issue Sep 10, 2020
@q9f q9f unpinned this issue Sep 11, 2020
@p3c-bot p3c-bot changed the title ECIP-1049: Change the ETC Proof of Work Algorithm to the SHA3 Standard ECIP-1049: Change the ETC Proof of Work Algorithm to Keccak-256 Sep 20, 2020
@sjococo

sjococo commented Sep 29, 2020

ETC is Ethereum Classic, forked from the original ETH chain. It uses Ethash with a DAG file. The DAG file is within 2 months of reaching 4 GB. At that size a lot of GPUs and ASICs will become obsolete.

A 51% attack is less likely to happen when it is expensive. We are so close to making attacks more expensive, and you want to throw that advantage away? It doesn't make sense at all.

Better to leave it alone for the moment and see what happens after all 4 GB GPUs and ASICs become obsolete for ETC. This will happen before they become obsolete for ETH, so you can see what happens to the ETC and ETH hashrates, and the consequences for difficulty, price, and miners' reactions, how traders react to the new conditions, and so on.

@iquidus
Contributor

iquidus commented Sep 30, 2020

This proposal currently appears incomplete.

It proposes replacing ethash (a complete consensus engine) with keccak-256, a hashing function, and does not go into any detail regarding the rest of the consensus engine. As is, block times, monetary policy, hashing function(s), difficulty algo, ghost protocol/uncles, DAG/caches, block validation and sealing are all part of the ethash consensus engine. Disabling or removing ethash disables/removes ALL of this.

If the proposal intends to replace ethash, it should cover the entire engine, not just one component of it. Otherwise it should be proposed as a modification to the existing engine (ethash).
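The distinction being drawn here can be sketched in Python (core-geth itself is Go, and every name below is illustrative rather than the real API): an ethash-style engine bundles several consensus duties, of which the seal hash is only one.

```python
# Illustrative toy engine, NOT core-geth code. It shows that the PoW
# hash is one method among several consensus duties bundled into an
# engine such as ethash; swapping seal_hash() alone does not touch
# difficulty, rewards, or uncle rules.
import hashlib

class ToyEngine:
    def seal_hash(self, header: bytes, nonce: int) -> bytes:
        # The component ECIP-1049 proposes to change. SHA-256 is a
        # stand-in here; the proposal specifies Keccak-256.
        return hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()

    def calc_difficulty(self, parent_difficulty: int, timestamp_delta: int) -> int:
        # Difficulty adjustment lives in the same engine but is a
        # separate rule from the hash function (simplified here).
        adjust = parent_difficulty // 2048
        return parent_difficulty + (adjust if timestamp_delta < 13 else -adjust)

    def block_reward(self, era: int) -> int:
        # Monetary policy (an ECIP-1017-style 20% reduction per era)
        # is also engine logic, also untouched by a hash swap.
        reward = 5 * 10**18
        for _ in range(era):
            reward = reward * 4 // 5
        return reward
```

Whether the change counts as "a new engine" or "ethash with `seal_hash` replaced" is exactly the implementation question at issue.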

@jean-m-cyr

It proposes replacing ethash (a complete consensus engine) with keccak-256, a hashing function, and does not go into any detail regarding the rest of the consensus engine. As is, block times, monetary policy, hashing function(s), difficulty algo, ghost protocol/uncles, DAG/caches, block validation and sealing are all part of the ethash consensus engine. Disabling or removing ethash disables/removes ALL of this.

Nonsense! Replacing the PoW algorithm (as opposed to misleadingly suggesting its removal) in no way affects all these other buzzwords you throw in willy-nilly.

@iquidus
Contributor

iquidus commented Sep 30, 2020

It proposes replacing ethash (a complete consensus engine) with keccak-256, a hashing function, and does not go into any detail regarding the rest of the consensus engine. As is, block times, monetary policy, hashing function(s), difficulty algo, ghost protocol/uncles, DAG/caches, block validation and sealing are all part of the ethash consensus engine. Disabling or removing ethash disables/removes ALL of this.

Nonsense! Replacing the PoW algorithm (as opposed to misleadingly suggesting its removal) in no way affects all these other buzzwords you throw in willy-nilly.

difficulty algo: https://github.com/etclabscore/core-geth/blob/master/consensus/ethash/consensus.go#L308
monetary policy: https://github.com/etclabscore/core-geth/blob/master/consensus/ethash/consensus_classic.go#L27

buzzwords.....

EDIT: whether this proposal introduces a new consensus engine (alongside clique and ethash) or simply modifies the existing engine (ethash) is crucial for implementation. No EVM-based chain has changed consensus engines mid-chain. It's uncharted territory requiring massive refactoring of client software, as this behaviour was never intended. A very different scenario from modifying the existing ethash engine.

@jean-m-cyr

There is no "massive refactoring" of clients required. The PoW verification code is a localized function in all clients. On-the-fly PoW algo swaps are not uncharted territory! Why even mention the EVM? The consensus algorithm is unchanged: difficulty adjusts to keep block time constant regardless of hash rate. What else?
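A minimal sketch of that "localized function" view, in Python with `hashlib.sha256` as a stand-in hash (the proposal specifies Keccak-256, which is not Python's `hashlib.sha3_256`; the padding differs): the validity check compares the seal hash against a difficulty-derived target, and swapping Ethash for Keccak-256 changes only how the hash is produced, not this comparison.

```python
# Minimal, illustrative PoW validity check. Ethash-style rule:
# a seal is valid when the hash, read as a big-endian integer,
# falls under 2**256 // difficulty.
import hashlib

def pow_valid(seal_hash: bytes, difficulty: int) -> bool:
    return int.from_bytes(seal_hash, "big") <= 2**256 // difficulty

# Stand-in hash function: the proposal specifies Keccak-256, which
# differs from hashlib.sha3_256 (NIST padding), so SHA-256 is used
# here purely for illustration.
h = hashlib.sha256(b"header|nonce").digest()
```

The counter-argument above is that this check is only one piece of the engine; this sketch shows the piece both sides agree is being swapped.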

@iquidus
Contributor

iquidus commented Sep 30, 2020

I'm aware that many bitcoin forks, based on bitcoin's code, have performed these kinds of changes, but can you show an example of an ethereum-based chain doing the same? They function quite differently...

Regarding refactoring: at least in core-geth (which makes up the majority of the nodes on the network), all logic assumes a single chain using a single engine, e.g. https://github.com/etclabscore/core-geth/blob/master/core/block_validator.go#L37
A single chain with multiple engines and a changeover would require significant refactoring.

It would be less intrusive as a modification to the existing engine, imo.

@zmitton
Contributor

zmitton commented Sep 30, 2020

The specification is quite clear that this only changes the hashing algorithm. In case you truly are confused and not just trolling: all the other consensus properties remain as-is.

@iquidus
Contributor

iquidus commented Sep 30, 2020

I'm referring to the implementation, which happens to be key here given that a hard fork is required, and about which the proposal provides little information regarding the changeover.

  • Activation Block: 12,000,000 (approx. 4 months from acceptance - January 2021)
  • If not activated by Block 12,500,000 this ECIP is voided and moved to Rejected.
  • We recommend difficulty be multiplied 100 times at the first Keccak-256 block compared to the final Ethash block. This is to compensate for the higher performance of Keccak and to prevent a pileup of orphaned blocks at switchover. This is not required for launch.

A consensus-breaking change such as multiplying the difficulty by 100x cannot be a mere recommendation: is it part of the specification or not?

EDIT: Such an intervention would give the chain an artificial weight. Given that chains compete by weight, this recommendation breaks everyone's beloved Nakamoto consensus, giving the 1049 chain a significant and unfair advantage over the non-1049 chain. Also, why 100x? How was this mysterious figure derived?
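The weight concern can be made concrete with a toy calculation (numbers are illustrative and the real difficulty-adjustment rules are elided): fork choice compares cumulative difficulty, so a one-off 100x bump at the fork block permanently inflates the 1049 chain's total difficulty relative to a chain that did not apply it.

```python
# Toy model of the recommended 100x difficulty bump and its effect
# on cumulative ("total") difficulty, which fork choice compares.
# Illustrative only; real adjustment rules are elided.

FORK_BLOCK = 12_000_000
MULTIPLIER = 100  # the recommended, not-yet-specified bump

def next_difficulty(parent_difficulty: int, block_number: int) -> int:
    if block_number == FORK_BLOCK:
        return parent_difficulty * MULTIPLIER
    return parent_difficulty  # normal adjustment elided

def total_difficulty(start: int, n_blocks: int, first_block: int) -> int:
    td, d = 0, start
    for i in range(n_blocks):
        d = next_difficulty(d, first_block + i)
        td += d
    return td
```

With a starting difficulty of 10, three post-fork blocks accumulate 100x the weight of three pre-fork blocks, which is the "artificial weight" being objected to.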

Given the scope of the change (it requires a hard fork), I would like to see more information on how the specification is intended to be implemented on a live network that currently uses ethash as its consensus engine. The only reference client provided doesn't handle a changeover, and is based on a client that no longer supports the network: https://github.com/antsankov/parity-ethereum/blob/sha3/astor.json#L5 This would imply we are simply modifying the ethash engine, yet language like "the final Ethash block" implies it is being replaced.

I, as a client dev, am seeking clarity, as the proposal as-is leaves me with questions about the intended implementation.

EDIT: Also, there are only activation blocks for mainnet. Given that this is a PoW-related hard fork, I assume it is intended to be activated on Mordor first?

@zmitton
Contributor

zmitton commented Oct 1, 2020

Thank you for the clarification, those are valid concerns.

@TheEnthusiasticAs
Member

TheEnthusiasticAs commented Oct 3, 2020

I am forwarding this from a miner, Victor, as his English is not strong:
In 2 months, video cards with 4 GB will stop mining Ethereum (ETH). That is about 30% of total capacity, which, with a smaller DAG, could successfully mine Ethereum Classic for another 3 years; and with Ethereum's (ETH) transition to PoS, the remaining capacity (270 TH/s) could join the Classic network.

Even 30% of that capacity is 70-90 TH/s, which by itself would make 51% attacks harder. In my opinion, the maximum hashrate of the Classic network was only 16 TH/s; at the moment it is 3.5 TH/s. That is very little, but it lets you mine 2-3 times more coins.
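Victor's figures can be framed as back-of-envelope arithmetic (TH/s values taken from the comment, all illustrative): a sustained 51% attack requires out-hashing the honest network, so the bar scales linearly with network hashrate.

```python
# Back-of-envelope for the hashrate figures above. A 51% attacker
# must control strictly more hashrate than the honest network, so
# the "attack bar" is simply the honest network's hashrate.

def attack_bar_th_s(network_th_s: float) -> float:
    # Attacker needs strictly more than this to sustain a reorg.
    return network_th_s

# Moving from today's ~3.5 TH/s to a projected 70 TH/s raises the
# bar 20x; at 90 TH/s it is over 25x.
ratio = attack_bar_th_s(70.0) / attack_bar_th_s(3.5)
```

This says nothing about rental cost or hardware availability, only that the required attacker hashrate grows in direct proportion to network hashrate.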

@tromp

tromp commented Oct 4, 2020

All "ASIC-resistant" coins end either with ASICs (ethereum, litecoin, decred, dash, grin, etc.etc.etc.)

That's entirely misleading regarding Grin. Grin has a dual PoW: one ASIC-friendly (Cuckatoo), and one ASIC-resistant (Cuckaroo) for the first two years only. The latter is tweaked every 6 months and has never had ASICs developed for it. All Grin ASIC development has been for the ASIC-friendly variant.

I agree that without frequent tweaking, no PoW can remain ASIC resistant...

@p3c-bot
Contributor Author

p3c-bot commented Nov 20, 2020

The conversation has been moved and the proposal re-submitted in exactly the same form, but without an implementation block number, here: #394

We invite everyone to continue the debate about this proposal on the new thread.
