
eth, eth/downloader, p2p: reserve half peer slots for snap peers during snap sync #22171

Merged (4 commits) on Jan 25, 2021

Conversation

@holiman (Contributor) commented Jan 14, 2021

This PR attempts to reserve half the peer slots for snap/1 peers, if we're actively trying to perform a snap sync.

Some caveats:

  • If we're just running --snapshot, but syncmode=fast, then it should do nothing
  • If we're running --snapshot and syncmode=snap, but are done syncing, it should do nothing
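
A minimal sketch of the reservation logic being described, reusing names that appear later in this thread (h.maxPeers, h.peers.Len(), h.peers.SnapLen()); this is hypothetical, not the exact merged code:

	// rejectNonSnapPeer reports whether a newly connecting peer that does not
	// speak snap/1 should be turned away while a snap sync is in progress.
	func (h *handler) rejectNonSnapPeer(peerSupportsSnap bool) bool {
		if atomic.LoadUint32(&h.snapSync) != 1 || peerSupportsSnap {
			return false // not snap syncing, or the peer is snap-capable
		}
		// Reserve half of the slots for snap/1 peers that have yet to connect.
		reserved := h.maxPeers/2 - h.peers.SnapLen()
		if reserved < 0 {
			reserved = 0
		}
		return h.peers.Len() >= h.maxPeers-reserved
	}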

@@ -289,6 +289,10 @@ func (d *Downloader) Synchronising() bool {
	return atomic.LoadInt32(&d.synchronising) > 0
}

func (d *Downloader) SnapsyncInProgress() bool {
	return d.snapSync && d.Synchronising()
}
holiman (Contributor, Author) commented:

Hm, this probably won't work, because synchronising is only set after we find a peer to start syncing against.

holiman (Contributor, Author) commented:

Checking d.snapSync should be sufficient; it's just that we currently don't set it to 0 after the sync is done.
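
Following that reasoning, the predicate could drop the Synchronising() check entirely; a sketch, assuming d.snapSync is also cleared once the sync finishes:

	func (d *Downloader) SnapsyncInProgress() bool {
		return d.snapSync
	}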

@holiman (Contributor, Author) commented Jan 14, 2021

Deployed this on mon02 now:

> var i = 0; admin.peers.forEach(function(p){console.log((i++) +" " +( p.network.inbound ? "inbound  " : "outbound ") + p.caps)})
0 outbound eth/63,eth/64,eth/65
1 inbound  eth/63,eth/64
2 inbound  eth/63,eth/64
3 inbound  eth/63,eth/64,eth/65
4 inbound  eth/63,eth/64
5 outbound eth/63,eth/64,eth/65
6 outbound eth/63,eth/64,eth/65
7 inbound  eth/63,eth/64,eth/65
8 inbound  eth/63,eth/64,eth/65
9 outbound eth/63,eth/64,eth/65
10 inbound  eth/63,eth/64,eth/65
11 inbound  eth/63,eth/64,eth/65
12 inbound  eth/62,eth/63,eth/64,eth/65
13 outbound eth/63,eth/64,eth/65
14 outbound eth/63,eth/64,eth/65
15 inbound  eth/63,eth/64
16 inbound  eth/63,eth/64,eth/65
17 outbound eth/63,eth/64,eth/65
18 outbound eth/63,eth/64,eth/65
19 outbound eth/63,eth/64,eth/65
20 inbound  eth/63,eth/64
21 inbound  eth/63,eth/64,eth/65
22 outbound eth/63,eth/64,eth/65
23 outbound eth/63,eth/64,eth/65
24 inbound  eth/63,eth/64,eth/65,les/2,les/3
25 inbound  eth/63,eth/64,eth/65

=> 25 (ish) non-snap peers

@holiman (Contributor, Author) commented Jan 14, 2021

One snap peer found

> var i = 0; admin.peers.forEach(function(p){console.log((i++) +" " +( p.network.inbound ? "inbound  " : "outbound ") + p.caps)})
0 outbound eth/63,eth/64,eth/65
1 inbound  eth/63,eth/64
2 inbound  eth/63,eth/64
3 inbound  eth/63,eth/64,eth/65
4 inbound  eth/63,eth/64
5 outbound eth/64,eth/65,les/2,les/3,snap/1
6 outbound eth/63,eth/64,eth/65
7 outbound eth/63,eth/64,eth/65
8 inbound  eth/63,eth/64,eth/65
9 inbound  eth/63,eth/64,eth/65
10 outbound eth/63,eth/64,eth/65
11 inbound  eth/63,eth/64,eth/65
12 inbound  eth/63,eth/64,eth/65
13 inbound  eth/62,eth/63,eth/64,eth/65
14 outbound eth/63,eth/64,eth/65
15 outbound eth/63,eth/64,eth/65
16 inbound  eth/63,eth/64
17 inbound  eth/63,eth/64,eth/65
18 outbound eth/63,eth/64,eth/65
19 outbound eth/63,eth/64,eth/65
20 outbound eth/63,eth/64,eth/65
21 inbound  eth/63,eth/64
22 inbound  eth/63,eth/64,eth/65
23 outbound eth/63,eth/64,eth/65
24 outbound eth/63,eth/64,eth/65
25 inbound  eth/63,eth/64,eth/65,les/2,les/3
26 inbound  eth/63,eth/64,eth/65

...but gone one second later

@holiman mentioned this pull request Jan 14, 2021
@holiman (Contributor, Author) commented Jan 14, 2021

It finished eventually:

Jan 14 20:15:48 mon02.ethdevops.io geth INFO [01-14|19:15:48.377] Fast sync complete, auto disabling
Jan 14 20:15:48 mon02.ethdevops.io geth INFO [01-14|19:15:48.377] Snap sync complete, auto disabling 

@holiman (Contributor, Author) commented Jan 15, 2021

Later on, after the sync is done, we're back at 50 peers, so the throttle was eventually disabled as intended:

> var i = 0; admin.peers.forEach(function(p){console.log((i++) +"\t" +p.enode+"\t"+p.id+"\t"+( p.network.inbound ? "inbound \t" : "outbound\t") + p.caps )})
0	enode://38f5612e6fd59de92ca64473817b526d2da2c194a87f30e5f2be57f2346789f46b440a257130e2842290d0fb118fe47be941fdf3d6382ede280fb560deb3ae1e@208.38.225.46:40095	054789eacc58a387fa7667e4c32fc29f4f559138e76bcdd455d629c826ef646f	inbound 	eth/63,eth/64,eth/65
1	enode://2b262295607669d262cc210e2522bb4315104e66fc1e4a58e5ca7b81d36a4129092c9d95663ffa5f84ba3d3372b5f51978ab60e95e4b68d54ad1fd81be31e461@168.119.137.228:30303	06ef97b67ab7e7d07d999316da999be0337202cfe52c52f7c4b8bc1b81f8fe32	outbound	eth/63,eth/64,eth/65
2	enode://5d6d7cd20d6da4bb83a1d28cadb5d409b64edf314c0335df658c1a54e32c7c4a7ab7823d57c39b6a757556e68ff1df17c748b698544a55cb488b52479a92b60f@104.42.217.25:30303	0c6253589a8d244d57abc1c8ab298e048f24487d6572d7890e0cbb7b70039621	outbound	eth/64,eth/65,les/2,les/3,snap/1
3	enode://33013a978de73d9c980b00a8c1114e96d5391896038ecb56c5026ee7f2842c62bb5b58b1c7d1ff887fda887c00a0a957c5887eb9854e5858297f2dd856131df4@80.66.80.221:64200	0cad73f954a7ad7f82e6e9c91b042255538ba9398443f1e8ae2952e5752dc462	inbound 	eth/63,eth/64,eth/65
4	enode://34f6062c34affeeff783acb0d23d9e25e34a5bfef238cac8d035726481b7e9ebe1ad222e8225cf86519962ff1b4466f4184ab4d3c27f16645b6d2a312a1e06bf@3.94.100.225:30303	0f9ac104c26a5f83ce994f91a3c6eb35c545014afef9b75b1e0418f3a3e5d67c	outbound	eth/63,eth/64
5	enode://a492364d49348424ba74a676ced8a348068c677307bee7295ab8e852f2f41ec73fa88f1eb295f286c0de15d1a326c9d21a3814080fcd700ec80cd59bed5fe2f0@73.162.180.17:37366	0fd8a8ec1a80d8920fad621867c60c5441e448c5c680ec5d9d9a52bdd6c686a1	inbound 	eth/63,eth/64,eth/65
6	enode://6df85f12ef73b200ffcddb6bc531c43f165a76f0a993c969d437d4127271945efd4cc0a45836503ce1fc08a6c7b213e4dd3436261ae6b9135b012b28ec995338@87.134.225.211:30309	10505dc675a4683cb7167e48bf1bea09bdfd47c59b84d0e96aa86f04821e6125	outbound	eth/63,eth/64,eth/65
7	enode://cb7180a427e933b04ecff3ebd1348078d8ad92072c85d3428bc40623c17dabab4568cee6e6cb69588af343bc9458aa0be3cfcfed0eccbbfe0d69a1328a0596bf@138.197.128.169:41634	12ee7f0fc1517912c159ef2c60d12838a9298397f16aea05083d49d89ecb27a1	inbound 	eth/63,eth/64,eth/65
8	enode://2bab96861c9b9384fbc91ff8ec0ec72f5d3fb89802b01cc70bb99739e2bdf9a7e659de4cc9bdef72f2a3885e30a3440a705afabc4ffc8c2a9e76c2272c6fb570@54.196.158.87:53474	14cc442e91bf169f91eba953dbb91254a70bd197c1f5ff38a431d3d25447eb29	inbound 	eth/63,eth/64
9	enode://b26e09f4f490366707e4371bca9e8338ad3c4bf563cecd9dd3190884b455cb181341479e3241aad235d44238204df5f36d02983a35408b659e7472c53c5b5d19@54.221.156.60:36878	15265c73d385b8eb53ce42ff48431daab4583af8b5e37cf4db6bfd12d45a032e	inbound 	eth/63,eth/64
10	enode://65da2e6c63e493ae7801a43d7e6e7b9d9f43fb677ffec209bde291e0213faa926b3c25ac5a16b4be60d59ae388bb0a3f85850f2d5e7de7de8277016480dde7ee@209.50.49.96:58642	19ec277f3f3029227c0b380b9110423cf4bc3a0832ad30646e3c61064ba2d042	inbound 	eth/63,eth/64
11	enode://859e4427b0e81b9671ce0394280cdbec9f37756b88ba69f5a248cc89466c9e3e1013040d13635c6130281a3e1e683f362e8f20af2da0a104d1e9b07c2c091c24@80.84.57.38:30303	22ad3654c71a8a0921376ec94b9350787c3f71f7d90f843343d416159d59f2bf	outbound	eth/63,eth/64,eth/65
12	enode://0afaea566ba87bf44a90b2f4631de71d03c956403bd8f8c680666e9d5bff50c55c6d6e2a8aec3479c35bb73c8c06f0b102c89fda7d2ea3d4266150724de44f31@50.3.85.106:58352	24125bd649760503b9f5cf346bdc785f6c01ef4d51706958c0c024af1d8d97df	inbound 	eth/64,eth/65
13	enode://f9e2fa7344f4238a9c985224585459d8ab710a23d0dd2176c3743470dda1f3fbd8a83d5e194285f367c7d0c83c3a97c4c4529f94c29cdb6e674283ba75ce2f03@51.77.153.43:47724	2600843943492cdd60f56f4d1ad53567a380c6ba662674aeb3fcf3a6bd89d0e1	inbound 	eth/63,eth/64,eth/65
14	enode://279944d8dcd428dffaa7436f25ca0ca43ae19e7bcf94a8fb7d1641651f92d121e972ac2e8f381414b80cc8e5555811c2ec6e1a99bb009b3f53c4c69923e11bd8@35.158.244.151:30303	2d1f1ff2774352477bfbf6ee10868373b074e7cd678270608eccfb2205d0e874	outbound	eth/64,eth/65,les/2,les/3,snap/1
15	enode://730a9a475a7f1db63ddec6267ad393aca207106378b5a5154d18bb080d2a0f6873f372c1ec470b99983bc287c0e44781b2e9f6f0c1646ee82f0e0b2d88dda53e@47.242.31.168:30303	33e265b58ed590a07ab2c0db460fd834966f5933746d90a7df46841251deb214	outbound	eth/63,eth/64,eth/65
16	enode://c0f1d9e3a335a760ba72e9bb7243ac73976ba58a5f1095101eccdc18c5eaf06163143e4c59a093e4613e23eb4033532e30c6769f57aceab2c3103a299f45ed72@107.151.190.21:24722	3723ae2255ef5b50d28bc7f0e6be07e43ac5802b217e05c81ed2547fbbc40757	inbound 	eth/63,eth/64,eth/65
17	enode://951d140b1a00c2566ad0dcf72739353e0f8978d12fa0361178dfdb43c00a094bb9e82d268c1bd96dfc36ea310d23420385f6e9b6938f97cf7ac0c8c59330fcfe@45.119.97.204:52564	420d9a78268e069a3184ac0d10f21f4da9aca276a890c03ccc4c57f04b508494	inbound 	eth/63,eth/64,eth/65
18	enode://75bb5d5af507a1d3e44c21c6a319a664bd90b55f4262defbad2e39e20e362201029d671452f7f2171d8ea41529a35a37347d3176891e2b9132a917e41dd58be2@208.88.169.151:30303	56832f68ff0f8d9bfca71604cf5a3892bcaa414671e7ad5067584bb95c82e160	outbound	eth/63,eth/64,eth/65
19	enode://c79aaa14479060542b76f072fdd94b4b6749b2d7d91329480fc20490e9bd8ba4d8d4f4ea43ee8190d556b6d71df6cee09af5b2f44eb06c2756828027bd34c8d9@139.99.123.215:50534	5a7a8d05dae9f67e8ab21826351be9c29eaaad8c78ed3987c38f423840530a4b	inbound 	eth/63,eth/64,eth/65
20	enode://9c55714d9cbc696c69e7d7a7240215e5e7e2d745e204444e64d7eafdcb3adcdb36b1ea42e39cb0bd8fc2152e213ad5b79de6c2743e1f7ca3ed39232866fa6452@172.96.165.50:55128	7348b9b42b64639e151bb5a3276270dab9f8253dd383874ca4dad2d2757b68ea	inbound 	eth/63,eth/64,eth/65
21	enode://d145af809ba6c652e7f1d7a5d6e32da1f0ec3f0bf28e6e646e3c2a718e5908c38219d3d7c267320adb5dfaeb9fa7bd9150154ee9344b5b755bf14b63b3af0aaf@92.38.148.32:55964	773c71615ad457fcb084051fb4e8ae5b0a38e438b9dac17a8eea2c1372d02f06	inbound 	eth/63,eth/64
22	enode://9397b02021264b3c98d6081d6468805e2e0843e20bf84203cea42b74a16be1aff5ab4e043a9136337b27e14f100587c7364b811206898b0a6221981dd716fc59@95.111.13.56:30303	793e558a9166fc43c37db21c212175a4dc15dc8f1a2cd6dbf94111d7558148db	outbound	eth/63,eth/64,eth/65
23	enode://6649f8b3a77c65f4c1256d0d73c5f5e660ed803761a5f8f4df0f597dbc8905ca7a12dd8da7d6e4404f4b7b49925732d1a462b850ebe83479850da8ae95f87b9a@45.63.95.55:35566	7a5bc82a129b0e67e2c7c23431767e326e4d49e86e6a776bf4a31461b86ce06a	inbound 	eth/63,eth/64,eth/65
24	enode://37b7996433b690133afd4ac998889349ac13ab3ed97a16a43ed9511ca889737b88cb4ce882c84966776f133e86c443cde9a9572292d624b3048aed9a141c6330@47.89.218.150:30303	7e96b11b2dbe000283a7c4d8a292f38d9e3b29f069d97ea57135baa42d491404	outbound	eth/64,eth/65
25	enode://4373b938c18108313cb61ef33aa87a4ddf08b609a791df0f27ccb70a4c3cabcb7b1f0bc1da196f5320a91c265ba84653f1d638d34d048b105407ae0df5f05d9d@46.33.96.9:37692	7f29010b26faaf9e664bbdc006ef60af7cab5a15647229dff1cfa16034952e63	inbound 	eth/63,eth/64,eth/65
26	enode://9e2083709367de84fd3f72d7df2ee0e9a5c729ed084a6dc8542473c63f1d80a78362d8261b3b6ba1dc91eaa78cf14aa2e593eec6e92a3cb5201a2c59c18c2609@2.26.147.53:44654	82b9d16a1d1e65d69e0b34564f7ce921ded761725d7181b3f5c896438bd6ea34	inbound 	eth/63,eth/64,eth/65
27	enode://fc758b27376a5be89b28bda374e45ad42b95d8d84f92909017441528e2345df7af626a4dce12194c618d276d9613379663d7e3c6461656fb4787606dbb68300d@168.119.68.101:30303	831d2e0f3bab6e2dcccfeec819a384fc029100624de04bcfb38b5b9031f2ed9e	outbound	eth/63,eth/64,eth/65
28	enode://6a6b3b1e8d4674130447879e47a4827d33b68b31745e46d8beeb0637d3c6ca8e6c18dcfe7a0bb15eb7cff0fc98f260a1d20cc2677e8c82e974032ef05a615f37@108.60.71.73:36372	8ad8f9c8aa6bd27c2988c228ed933915d05fdbc08ea0517a52f2e11973ae7060	inbound 	eth/63,eth/64,eth/65
29	enode://e58a082778ba15d1170b6ae1fd87dd768dd6f160988fdab54c0d315420baabd4585e5d27b030af5746639761bcfe25b4c599059854b1eb771af9086501913493@54.248.44.237:43846	8b21453f7f4fb0345440be6660ac6f10109efaa2a5203470273395320dc99aec	inbound 	eth/63,eth/64,eth/65
30	enode://a21eaa48e2dfcb241413fc3f6c745c19f8f8276691d0540aec8b5a3765696ece7f956d6d43e760eb5bc14a7a0b40e96c9fcbf6a916f1b80283dd745ccd505570@79.214.207.3:55254	903c1965a3a7eb4e545775799a1e10d985c9f2b336c33773b4f58f0ca9e6ab6d	inbound 	eth/63,eth/64,eth/65
31	enode://0f2d47e125609a7385c721188ba9da8086c32fab348b14b32e328b176d70ef735fc377f8254cb814cb3fc2060e0cb7a12b7db04478a2941dd8f32606edf954ba@18.166.152.113:55010	9cf116bd94b8567ad006c83a78fa408c774fa51a483c0962c807513ec7836744	inbound 	eth/63,eth/64,eth/65
32	enode://740d991cbe8b3793b0ad4aef6b7aba7ddca6a1fac3acd1332bf4e63121b8d1076b95d6c5a6db8513f14dee541ac1d6bc28b29077fc4ea3e5e0c462c093b598dc@78.159.96.249:59816	a208270eaf448a43c31fb13b23f18ede0883cba1d20d549fd8f5a941a60fe01b	inbound 	eth/63,eth/64,eth/65
33	enode://40ceb2db099497e59d4b60722a054b7017802f7f17eed0317776013b5fd890897d4cb1b0e4752c59ede1c7a00f21b0325f7c850afaafedc815c8d23e1fcc0236@99.247.132.44:41754	a2be451c28a3a020ff13e8d754a406010070c9cfc1c6070d98784ccdd021e2cd	inbound 	eth/63,eth/64,eth/65
34	enode://49a0b3a28fe01135f1b2bf7ead149e718aaa5cd57651e53e102c73760e61ec63d97b20ec8cbb16f26bdc66955861aae0bba278b52f7027c19f9b01be78fbe75d@195.190.10.214:2657	aa1155df051681992737ab0e869ec969a7a9b30001c8c684b5534941b245b8ad	inbound 	eth/63,eth/64,eth/65
35	enode://4ae42cb0e5c0fdefbfe381c7ee634bb1423ebcbfd69b3c8d7e248e557ca948d6424ea89994ddccbdf5a69c38cc514de112fd1ee8a166f6e4e951c359214ec185@34.229.175.131:52396	abf82090b6fd04780dd50cd1ef950f7a0d969ca5fbe727eb6940e64d3b557953	inbound 	eth/63,eth/64
36	enode://8ab488a350ce015c745e895d9080c1cecf2bd5a3d89ec1c886a2481bd2f6c39f5440a1b2095b5153f7c891ef312e3ecfa2c84fc4d8f896a9277345b73d796680@103.117.147.2:43616	bb820631cbe5dbaae1d08c4376c33946c1075a4aeda1598487a8c900220b9f5f	inbound 	eth/63,eth/64,eth/65
37	enode://1b6ad36f47841b4113da5a3790388e38e7ea676ccdcbd62af7eca9828c09381e803d43452157d830b07a0e5b82e8de39f2f3cd98caa66b2b8b8fe17912026a2c@54.147.190.39:45042	bd0fa3ae23f844e9ee6197bbe4f32ee5b95fd0a191bad9d226a8c83b5a923d9c	inbound 	eth/63,eth/64
38	enode://d860a01f9722d78051619d1e2351aba3f43f943f6f00718d1b9baa4101932a1f5011f16bb2b1bb35db20d6fe28fa0bf09636d26a87d31de9ec6203eeedb1f666@18.138.108.67:30303	c845e51a5e470e445ad424f7cb516339237f469ad7b3c903221b5c49ce55863f	outbound	eth/64,eth/65,les/2,les/3,snap/1
39	enode://5cec4d8f606576120eb7cede35d4dda0eca4b4066a31ba35c23ecb600a0af8eb4e40140fe7989519045f685f31d1d6ccaa6662b69a7243d5c6170c646b06b3b4@209.151.156.184:30303	da36db1373e729e8c9a37525d829e36360a44ab01427a1a972e7ef64e106a2a4	outbound	eth/63,eth/64,les/2,les/3
40	enode://731bf244625572cb5818f47a5f2b16f176846a678c554e3bd245bd306291061ef71c27658887f2e9b49d218152018dcea13e30ea13a5d4fe793c3cb45b7c6587@135.181.142.187:42924	dc86ab04ef571976e7bb82c80054d638944061e46786f398ea0fc107a52024a0	inbound 	eth/63,eth/64,eth/65
41	enode://ddb53616f1299a02d8775ade5a075f1e262fa908d8bb1f27bccf72d9036d10549df4d0f1992ee1674ca084e8fa8c27605592f448cebb7326d0b2d5975d6fd78c@13.230.253.68:37014	ddc6f00af6d7fb600c79ac57f2f0881df65797b12992f62d4a48fc64cbdc1bc9	inbound 	eth/63,eth/64,eth/65
42	enode://3c90333270a53a1320a641d796416f529a1810ad1c745419bd889f723bc072c4a9180bdc659c99fe0c64a165fd5050bf194e3fc9ec04ba563bcd03945265e481@168.119.137.226:58136	debaa0fe74e14b3fe2f618b2ca0181a961ab03caff74d5d8347664782e5fae68	inbound 	eth/63,eth/64,eth/65
43	enode://ae3ca50f008a85e0b69f42420dc43e24914a0332549bd2b25a34d2a3976a0eaa15164a6bd0ca0502cabef65f643e69f7d0275d3026cf17d3ff1434630b4e1182@54.39.133.125:43710	e7a136d5dc92903803f2a93a51804466a07ac323212dcb860a7874cf4e99520a	inbound 	eth/63,eth/64,eth/65
44	enode://7ae1a181ffc8400d6ae2476971fb43b64d2eca7abdc6a010154271e7d105785e0054769f3dd622b494620707584080d83721747485aea33be50f65a4bf616956@116.202.210.165:53292	ea563231294e05faaf3326bf5dc9d4e8c590406060672903f4787746e5108b7e	inbound 	eth/63,eth/64
45	enode://95e829a80cf33aef2196feb7506dc48ed623ef6b4efd30687902aeed849a5599aecc9a5ef0fc33078e1bd1500f7ec5717828367996d8c2da3c3a9ec3b3e41689@51.81.93.168:30302	f21696a2165aab05b4a652e8d204ce32ef6fc4526ebf6560f6455ccb6faf68f1	outbound	eth/63,eth/64,eth/65
46	enode://f00e683a14031721b1d67a38f84c7dcbfc4dbbc9e9952e2506f8c659645d7147ea9929cdcc0ca7b6e20c17fe6f5463a052f3053d8e415c7d711e6c2e1e34d435@24.48.96.141:30303	f3218808da671914a021e36c6fa0bbd56c20786090db73d76f1e338a33aa96a3	outbound	eth/63,eth/64,eth/65
47	enode://01434d183610c37bd3b061fa963481fbd1d21498b24c0f692a2f068b6e9a7dbfdd8b400b4f26d02707a0763d026d92a0ec46f7f9a9ae50514577734cd60741e7@119.28.83.30:48504	f6668af2e041492a1a19adfc10217416eca00e8b6560427371253eb6eaa9c568	inbound 	eth/63,eth/64,eth/65
48	enode://5feca40a8ba6d76e7693a7c44967a5346a4d36aa08c678d4b021ad5f75a60145509425b6776b6c0f81cb5f40e0247238d5ba8a44777aef30ff7be00de1f9328e@52.3.224.8:30303	fe6f5bf341b066593c1289454e69ff19302c17f74efa12ac143c124395708421	outbound	eth/63,eth/64,eth/65
49	enode://38968299e11722e2f898bc1b9e794ffc31d95f047c566f6ae38f0451e09558b6aae88b6f14c73901c2903ba6927841b7b116824919cbf8014c3b3ebb8855f72c@54.255.189.69:36578	fead073c869783c92260b9238b52c999f1785c1714fd6040db681bbe973f74c1	inbound 	eth/63,eth/64,eth/65

eth/handler.go (outdated)

	if atomic.LoadUint32(&h.snapSync) == 1 && !peer.SupportsCap("snap", 1) {
		// If we are running snap-sync, we want to reserve half the peer slots for
		// peers supporting the snap protocol
		reserved = h.maxPeers/2 - len(h.peers.snapPeers)
karalabe (Member) commented:

peers needs a lock before you can access an internal field. We already have a Len for the eth peers; we can add a second one for the snap peers.
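
A sketch of the suggested accessor, assuming peerSet guards its fields with an RWMutex the way the existing Len does (details are illustrative):

	// SnapLen returns the number of connected peers that also support the
	// snap/1 protocol, holding the registry's read lock for the duration.
	func (ps *peerSet) SnapLen() int {
		ps.lock.RLock()
		defer ps.lock.RUnlock()

		return len(ps.snapPeers)
	}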

	}
	if reserved < 0 {
		reserved = 0
	}
karalabe (Member) commented:

I like that the solution is short and sweet, but there's a corner case we need to support. Currently the P2P layer splits the allowed peer count in two, reserving half for inbound and half for outbound connections. If my node is NATed, I won't get any inbound connections, so I'll use up half of my slots on outbound ones. The issue is that I'm able to fully saturate my outbound connections with eth-only peers without ever triggering the above snap reservation.

One solution is to reserve half of the inbound and half of the outbound connections separately for snap. That would solve the above issue, but it can still be a bit flaky if I only have a quarter of the peer slots to work with.

Another alternative could be to enforce the restriction based on the currently connected peer count. E.g. reject if peers.LenEth() > max(2 * peers.LenSnap(), 5). That would guarantee that we can always get some baseline number of eth peers, but above that, we're enforcing half and half.

holiman (Contributor, Author) commented:

Not quite sure I understand... Do you mean like this?

	snapPeers := h.peers.SnapLen()
	ethPeers := h.peers.Len() - snapPeers // snap-peers are a subset of eth-peers
	if ethPeers > 5 && ethPeers > 2*snapPeers {
		reject = true
	}

In that case, it would always accept up to 5 non-snap peers, but would otherwise allow twice as many non-snap as snap.

Alternatively:

	snapPeers := h.peers.SnapLen()
	ethPeers := h.peers.Len() - snapPeers // snap-peers are a subset of eth-peers
	if ethPeers > 5 && ethPeers > snapPeers {
		reject = true
	}

In that case, it would always accept up to 5 non-snap peers, but would otherwise allow only as many non-snap as snap.

holiman (Contributor, Author) commented:

I guess a different method would be to "reject if non-snap peers are more than 5 + snap peers".

As it is currently (83a9567), going from 5 to 6 non-snap peers requires 3 snap peers to have joined before the 6th non-snap peer is allowed in.
The alternative would only require 1 snap peer before the 6th is allowed in. And eventually, with 50 peers, you'd end up with something like 23 snap and 27 non-snap peers.
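
Expressed in the same style as the snippets above, that alternative would be (hypothetical, following the discussion rather than the merged code):

	snapPeers := h.peers.SnapLen()
	ethPeers := h.peers.Len() - snapPeers // snap-peers are a subset of eth-peers
	if ethPeers > 5+snapPeers {
		reject = true
	}

At a full 50 peers the steady state is non-snap = snap + 5, i.e. roughly the 27/23 split mentioned above.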

@holiman (Contributor, Author) commented Jan 25, 2021

I made an implementation of what I think you meant, PTAL

@karalabe (Member) left a comment:

LGTM
