
vdpa/virtio: replace doorbell relay with notifier #52

Merged
merged 2 commits into Mellanox:main on Jan 3, 2024

Conversation

kailiangz1
Collaborator

The doorbell relay used previously is time consuming. Use a notifier instead, and hand the notify off to a DPDK work thread: we cannot block while processing a vhost message, otherwise a deadlock would occur, so a worker thread finishes the job.
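
For context, a minimal sketch of the launch pattern this PR describes, pieced together from the diff excerpts below; the struct layout, the DRV_LOG placeholder, and the body of virtio_vdpa_dev_notifier_work are illustrative assumptions, not the driver's exact code:

```c
#include <errno.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_log.h>
#include <rte_malloc.h>

/* Placeholder for the driver's logging macro. */
#define DRV_LOG(level, fmt, ...) \
	rte_log(RTE_LOG_ ## level, RTE_LOGTYPE_USER1, \
		"vdpa: " fmt "\n", ##__VA_ARGS__)

/* Persists across launches so consecutive jobs land on different lcores. */
static unsigned int virtio_vdpa_lcore_id;

/* Hypothetical payload handed to the worker. */
struct virtio_vdpa_notify_work {
	int vf_id;
};

/* Runs on a worker lcore; the vhost message handler never blocks on it. */
static int
virtio_vdpa_dev_notifier_work(void *arg)
{
	struct virtio_vdpa_notify_work *work = arg;

	/* ... set up / kick the device notifier for work->vf_id ... */
	rte_free(work);
	return 0;
}

static int
virtio_vdpa_launch_notifier(int vf_id)
{
	struct virtio_vdpa_notify_work *notify_work;
	int ret;

	/* Next enabled worker lcore, skipping main and wrapping around. */
	virtio_vdpa_lcore_id = rte_get_next_lcore(virtio_vdpa_lcore_id, 1, 1);

	notify_work = rte_zmalloc(NULL, sizeof(*notify_work), 0);
	if (!notify_work) {
		DRV_LOG(ERR, "vfid %d failed to alloc notify work", vf_id);
		return -ENOMEM;
	}
	notify_work->vf_id = vf_id;

	/* Fire and return: waiting here, inside vhost message processing,
	 * is exactly the deadlock the commit message describes. */
	ret = rte_eal_remote_launch(virtio_vdpa_dev_notifier_work,
			notify_work, virtio_vdpa_lcore_id);
	if (ret) {
		DRV_LOG(ERR, "vfid %d failed to launch notifier work ret:%d",
				vf_id, ret);
		rte_free(notify_work);
	}
	return ret;
}
```

Note that rte_eal_remote_launch only succeeds when the target lcore is in the WAIT state, which is why the error path logs the failure and frees the work item.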

DRV_LOG(ERR, "%s virtq doorbell relay failed ret:%d",
priv->vdev->device->name, ret);
return ret;
if (priv->configured){
Collaborator

Better to add a space between ) and {.

Collaborator Author

done

priv->vdev->device->name, ret);
return ret;
if (priv->configured){
virtio_vdpa_lcore_id = rte_get_next_lcore(virtio_vdpa_lcore_id, 1, 1);
Collaborator

We get this lcore_id with rte_get_next_lcore every time, so does it need to be a global variable? Maybe a temporary would do?

Collaborator Author

We use a global so the next launch does not pick the same core again.

Collaborator

ok
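
For context, a small sketch of why the global matters here; the helper name is made up for illustration:

```c
#include <rte_lcore.h>

/* Persists across calls: the search resumes after the last lcore used. */
static unsigned int virtio_vdpa_lcore_id;

/* With worker lcores {1, 2, 3} enabled, successive calls return
 * 1, 2, 3, 1, ... A function-local variable starting at 0 would
 * return lcore 1 every time. */
static unsigned int
next_notifier_lcore(void)
{
	virtio_vdpa_lcore_id = rte_get_next_lcore(virtio_vdpa_lcore_id,
			1 /* skip main lcore */, 1 /* wrap */);
	return virtio_vdpa_lcore_id;
}
```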

if (priv->configured){
virtio_vdpa_lcore_id = rte_get_next_lcore(virtio_vdpa_lcore_id, 1, 1);
notify_work = rte_zmalloc(NULL, sizeof(*notify_work), 0);
if(!notify_work) {
Collaborator

Better to add a space after if.

Collaborator Author

done

virtio_vdpa_lcore_id = rte_get_next_lcore(virtio_vdpa_lcore_id, 1, 1);
notify_work = rte_zmalloc(NULL, sizeof(*notify_work), 0);
if(!notify_work) {
DRV_LOG(ERR, "%s vfid %d failed request memory", priv->vdev->device->name, priv->vf_id);
Collaborator

Maybe "failed to alloc notify work"?

Collaborator Author

done

@@ -964,6 +829,24 @@ virtio_vdpa_dev_close_work(void *arg)
priv->dev_work_flag = VIRTIO_VDPA_DEV_CLOSE_WORK_DONE;
return ret;
}
static int
Collaborator

Maybe add a blank line before this.

Collaborator Author

done

virtio_vdpa_lcore_id = rte_get_next_lcore(virtio_vdpa_lcore_id, 1, 1);
notify_work = rte_zmalloc(NULL, sizeof(*notify_work), 0);
if(!notify_work) {
DRV_LOG(ERR, "%s vfid %d failed request memory", vdev->device->name, priv->vf_id);
Collaborator

Similar comment as above.

Collaborator Author

done

@@ -1233,6 +1117,23 @@ virtio_vdpa_dev_config(int vid)
}
priv->lm_status = VIRTIO_S_RUNNING;

virtio_vdpa_lcore_id = rte_get_next_lcore(virtio_vdpa_lcore_id, 1, 1);
notify_work = rte_zmalloc(NULL, sizeof(*notify_work), 0);
if(!notify_work) {
Collaborator

space after if

Collaborator Author

done

DRV_LOG(INFO, "%s vfid %d launch notifier work lcore:%d", vdev->device->name, priv->vf_id, virtio_vdpa_lcore_id);
ret = rte_eal_remote_launch(virtio_vdpa_dev_notifier_work, notify_work, virtio_vdpa_lcore_id);
if (ret) {
DRV_LOG(ERR, "%s vfid %d failed launch notifier work ret:%d lcore:%d", vdev->device->name, priv->vf_id, ret, virtio_vdpa_lcore_id);
Collaborator

Lines 1130 and 1133 seem too long. Maybe split each across two lines?

Collaborator Author

done
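
For reference, a sketch of the requested reflow of the DRV_LOG call from the diff above, with the continuation arguments wrapped onto a second line:

```c
/* Before: all arguments on one long line. After: */
DRV_LOG(ERR, "%s vfid %d failed launch notifier work ret:%d lcore:%d",
	vdev->device->name, priv->vf_id, ret, virtio_vdpa_lcore_id);
```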

@Ch3n60x
Collaborator

Ch3n60x commented Dec 19, 2023

Did some testing. Results show that before this commit, set_vring_state could take more than 0.01 s for some queues; after this change, set_vring_state for all queues stays below 0.0001 s.

The call fd is -1 during pre-configuration. To keep the second
restore from modifying queue info, which takes a long time, set
the queue's MSI-X to the default value. The DPDK case is different,
though: the second restore takes longer than the kernel one.

Signed-off-by: Kailiang <kailiangz@nvidia.com>
The doorbell relay used previously is time consuming. Use a
notifier instead, and do the notify from a DPDK work thread: we
cannot block while processing a vhost message, otherwise a deadlock
would occur, so a worker thread finishes the job.

Signed-off-by: Kailiang <kailiangz@nvidia.com>
@kailiangz1 kailiangz1 merged commit a8309bd into Mellanox:main Jan 3, 2024