
app/vfe-vdpa: add wait time after socket start #81

Merged 1 commit on Apr 30, 2024

Conversation

Ch3n60x (Collaborator) commented Apr 30, 2024

When multiple VFs belong to a single VM, QEMU handles the vhost-user message resend of the devices one by one, and the vhost fd closes happen in quick succession. This increases downtime for VFs that restore late, since they must wait for the earlier VFs to reach driver-ok on the QEMU side. This commit fixes the issue by adding a wait time per VF.

@@ -267,9 +272,21 @@ virtio_ha_client_dev_restore_vf(int total_vf)
}
pthread_mutex_unlock(&vf_restore_lock);
total_vf--;
usleep(MIN_WAIT_TIME);
Collaborator: If the vhost fd is -1, we should not wait.

Collaborator: MIN_WAIT_TIME seems unnecessary.

Collaborator (Author): Done
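The review points above (skip the wait when a VF has no vhost fd, and drop the separate MIN_WAIT_TIME constant) can be sketched roughly as follows. This is a hypothetical illustration, not the merged code: the function name `wait_after_vf_restore`, the constant `VF_WAIT_TIME_US`, and its value are placeholders, and the real restore loop in `virtio_ha_client_dev_restore_vf` carries more state.

```c
/* Hypothetical sketch of the per-VF wait discussed in this PR.
 * After each VF's vhost-user socket is restored, pause briefly so QEMU
 * can finish the message resend for that device before the next VF's
 * fd activity begins. Per review, a VF with no vhost connection
 * (fd == -1) is skipped entirely. */
#include <unistd.h>

#define VF_WAIT_TIME_US 100000 /* placeholder interval, not from the PR */

/* Returns 1 if a wait was performed, 0 if it was skipped. */
static int wait_after_vf_restore(int vhost_fd)
{
	if (vhost_fd < 0) /* no active vhost connection: nothing to wait for */
		return 0;
	usleep(VF_WAIT_TIME_US);
	return 1;
}
```

The fd check matters because a fixed sleep on every iteration would add latency even for VFs that never established a vhost-user connection, where there is no QEMU-side resend to wait for.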


Signed-off-by: Chenbo Xia <chenbox@nvidia.com>
@kailiangz1 kailiangz1 merged commit b3f3b71 into Mellanox:main Apr 30, 2024