GRPC Error: 14 UNAVAILABLE: TCP Write failed #31
Comments
Update: It seems that this problem is related to the container: it only happens when the hostie server runs inside Docker. (The wechaty-puppet-hostie has always been inside a Linux box in the tests above.)
Here are some of the same error logs related to this issue. Project route: wechaty@0.39.14 -> hostie -> donut@0.3.11 -> Windows server (wechaty@0.39.14)
Log
Key Info
The WeChat account has already logged in, but the bot still emits the old QR code and the … event.
It seems there is one issue that we can fix now: if we do not have any valid QR code, then we should not emit the … event.
Shall we add some checks around the …?
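To illustrate the idea, here is a minimal sketch of such a check (the function below is hypothetical and only shows the shape of the guard, not actual puppet code):

// Hypothetical guard: only emit a QR code when we actually have a valid one.
function emitQrCode (emit: (qrcode: string) => void, qrcode?: string): void {
  if (!qrcode) {
    return;            // no valid QR code, so do not emit anything
  }
  emit(qrcode);        // emit only the fresh, valid QR code
}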
I agree with you that it would be better to do more checks when we get the …. However, the …. Please let me know if you have any way to recover the system from unknown errors, thanks!
I would assume most of the errors we have encountered recently do not require a system recovery, since the system is not down. So it would be better to use …
Maybe we need to refer to this comment and add retry logic for our gRPC connection.
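For illustration, here is a minimal sketch of what such retry logic could look like (the helper below is hypothetical and not part of any existing Wechaty code; it simply retries transient UNAVAILABLE failures with exponential backoff):

// Hypothetical helper: retry an async gRPC call a few times on UNAVAILABLE errors.
async function callWithRetry<T> (call: () => Promise<T>, maxRetries = 3): Promise<T> {
  let lastError: Error | undefined;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await call();
    } catch (e) {
      lastError = e as Error;
      // Only retry errors that look transient (gRPC status 14 UNAVAILABLE).
      if (!/UNAVAILABLE/.test(lastError.message)) {
        throw lastError;
      }
      // Exponential backoff: 500 ms, 1 s, 2 s, ...
      await new Promise(resolve => setTimeout(resolve, 500 * 2 ** attempt));
    }
  }
  throw lastError;
}
// Example usage (hypothetical): await callWithRetry(() => contact.sync())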
Thanks for the link. I'd like to accept a PR that adds retry logic to the GRPC service. However, I feel that there are other reasons behind this error as well. Please let me know if you have any way to recover the system from unknown errors, thanks!
I agree. There must be some weird network issue when we use containers. To resolve this issue, I think we have two ways: …
I would prefer to do both, but since I don't have any insight into the second one, I will do the first one first :P
I finally got reproduction code:

import { Contact, Wechaty, Room } from 'wechaty';

const bot = new Wechaty({
  puppet: 'wechaty-puppet-hostie',
  puppetOptions: {
    token: 'a-very-secret-token-that-no-one-knows',
  },
});

// Sync every contact in parallel.
const syncContact = async (contacts: Contact[]) => {
  await Promise.all(contacts.map(c => c.sync()));
};

// Sync every room, then every member of every room, all in parallel.
const syncRoom = async (rooms: Room[]) => {
  await Promise.all(rooms.map(async r => {
    await r.sync();
    const members = await r.memberAll();
    await Promise.all(members.map(m => m.sync()));
  }));
};

bot.on('login', async user => {
  console.log(`Login: ${user}`);
  // Every 5 seconds, kick off hundreds of parallel GRPC calls.
  setInterval(async () => {
    const contacts = await bot.Contact.findAll();
    const rooms = await bot.Room.findAll();
    void syncContact(contacts);
    void syncRoom(rooms);
  }, 5000);
}).start();

This will trigger the error described in this issue.
I found that the lower-layer error happens at the TCP layer:

read: error={"created":"@1594209127.281811000","description":"EOF","file":"../deps/grpc/src/core/lib/iomgr/tcp_uv.cc","file_line":106}
... // A lot of write binary data log
write complete on 0x10271cf00: error="No Error"
TCP 0x10271cf00 shutdown why={"created":"@1594209127.283200000","description":"Delayed close due to in-progress write","file":"../deps/grpc/src/core/ext/transport/chttp2/transport/chttp2_transport.cc","file_line":593,"referenced_errors":[{"created":"@1594209127.281902000","description":"Endpoint read failed","file":"../deps/grpc/src/core/ext/transport/chttp2/transport/chttp2_transport.cc","file_line":2559,"grpc_status":14,"occurred_during_write":0,"referenced_errors":[{"created":"@1594209127.281811000","description":"EOF","file":"../deps/grpc/src/core/lib/iomgr/tcp_uv.cc","file_line":106}]}]}
I0708 19:52:07.339226000 4344593856 tcp_client_custom.cc:151] CLIENT_CONNECT: 0x102a2c940 ipv4:68.79.51.17:8803: asynchronously connecting

To catch this error, run the test code above with GRPC_VERBOSITY=info GRPC_TRACE=tcp ts-node test.ts. You will probably see the error shown above.
From this blog post: …
So the problem is probably that the server side closes the connection first. I will try to dig deeper into the server-side logs.
Here is the log from the hostie server when the connection dropped:

tcp_custom.cc:218] write complete on 029FCC20: error={"created":"@1594210168.896000000","description":"TCP Write failed","file":"d:\a\grpc-node\grpc-node\packages\grpc-native-core\deps\grpc\src\core\lib\iomgr\tcp_uv.cc","file_line":72,"grpc_status":14,"os_error":"message too long"}

There are multiple entries like this one.
The os_error is "message too long". Then I found some articles related to Docker: when we use Docker to start donut, there are multiple layers, and I think some layer within the stack might have a different ….
Linked Stack Overflow: …
Thanks for murgatroid99's comments, another keyword is ….
Searching with that keyword …. Will investigate more.
@huan Any progress on your side? |
Unfortunately, I have not started yet because I have to finish some other tasks before I can enjoy debugging this MONSTER. Thank you very much for your patience; I will let you know as soon as I get to it!
Got it! |
Maybe we should have a look at the … settings. See: …
The …
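The exact settings referenced above were lost in this thread, but as one example of the kind of gRPC channel configuration that is commonly tuned for dropped connections, here is a sketch using @grpc/grpc-js keepalive channel options (illustrative only; not necessarily the configuration meant here, and untested against this issue):

// Illustrative @grpc/grpc-js channel options; the keys are standard gRPC channel
// arguments, and the values below are example numbers only.
import * as grpc from '@grpc/grpc-js';

const channelOptions: grpc.ChannelOptions = {
  'grpc.keepalive_time_ms': 30000,          // send a keepalive ping every 30 s
  'grpc.keepalive_timeout_ms': 10000,       // wait up to 10 s for the ping ack
  'grpc.keepalive_permit_without_calls': 1, // ping even when no call is in flight
};

// 'PuppetClient' is a placeholder for whatever generated client the service uses:
// const client = new PuppetClient('example.com:50051', grpc.credentials.createInsecure(), channelOptions);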
Have you tested these configurations? What were the results? Do they solve the problem in this issue?
Update: We have started using …. Let's keep an eye on this issue once we have upgraded our puppet service server to wechaty v0.57 or above.
@huan With an application I'm working on, I'm seeing the same GRPC error. It uses …
Hi @MauritsR, we have not seen this issue since we switched to …
When the puppet hostie calls hundreds of GRPC methods in parallel from the puppet service, it seems there is a very high probability of hitting the following error: GRPC Error: 14 UNAVAILABLE: TCP Write failed
Links