
DMA map optimization #49

Merged · 2 commits merged on Dec 19, 2023
Conversation

@Ch3n60x (Collaborator) commented Nov 29, 2023

This PR optimizes DMA mapping.

@@ -702,7 +699,7 @@ virtio_vdpa_find_mem_reg(const struct rte_vhost_mem_region *key, const struct rt

 	for (i = 0; i < mem->nregions; i++) {
 		reg = &mem->regions[i];
-		if ((reg->host_user_addr == key->host_user_addr) &&
+		if ((reg->guest_user_addr == key->guest_user_addr) &&
Collaborator

Why change this address? Any special reason?

Ch3n60x (Collaborator, Author)

In the vhost lib, two vdpa devices will map the same memory fd twice, so the same physical memory ends up with a different host_user_addr for each device:

VIRTIO VDPA virtio_vdpa_dev_set_mem_table(): device 0 region 0: HVA 0x7f7680000000, GPA 0x0, size 0xc0000000.

VIRTIO VDPA virtio_vdpa_dev_set_mem_table(): device 0 region 1: HVA 0x7f7540000000, GPA 0x100000000, size 0x140000000.

VIRTIO VDPA virtio_vdpa_dev_set_mem_table(): device 1 region 0: HVA 0x7f73c0000000, GPA 0x0, size 0xc0000000.

VIRTIO VDPA virtio_vdpa_dev_set_mem_table(): device 1 region 1: HVA 0x7f7280000000, GPA 0x100000000, size 0x140000000.

So we can't use the HVA to know whether we have already DMA-mapped the physical page(s). The QEMU virtual address (guest_user_addr) is an option, because for the same physical page the QEMU VA is the same. The vhost lib also uses this logic to check whether the old and new memory regions are the same (in vhost_memory_changed()).
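
For context, a self-contained sketch of the lookup this diff touches, with regions matched by guest_user_addr. The helper name find_mem_reg and the extra GPA/size checks are illustrative assumptions; the struct rte_vhost_mem_region fields come from DPDK's rte_vhost.h.

```c
/* Sketch of the lookup after this change: regions are matched by QEMU VA
 * (guest_user_addr), which is stable across vdpa devices, rather than by
 * HVA (host_user_addr), which is not. The helper name and the extra
 * GPA/size checks are assumptions; the fields are from rte_vhost.h.
 */
#include <stdint.h>
#include <rte_vhost.h>

static const struct rte_vhost_mem_region *
find_mem_reg(const struct rte_vhost_mem_region *key,
	     const struct rte_vhost_memory *mem)
{
	const struct rte_vhost_mem_region *reg;
	uint32_t i;

	for (i = 0; i < mem->nregions; i++) {
		reg = &mem->regions[i];
		if ((reg->guest_user_addr == key->guest_user_addr) &&
		    (reg->guest_phys_addr == key->guest_phys_addr) &&
		    (reg->size == key->size))
			return reg;
	}
	return NULL;
}
```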

@Ch3n60x force-pushed the dma_map_opt branch 4 times, most recently from 6141852 to 9b0de0d on December 15, 2023 02:55
@yajwu (Collaborator) commented Dec 18, 2023

For VF query:
python sw/dpdk/app/vfe-vdpa/vhostmgmt vf -l 0000:af:00.2
{'mgmtpf': '0000:af:00.2', 'list': True}

{
    "devices": [
        {
            "vf": "0000:af:04.5",
            "socket_file": "/tmp/vfe-net0",
            "msix_num": "60",
            "mtu": "0",
            "queue_size": "1024",
            "mac": "00:00:00:00:00:00",
            "queue_num": "62",
            "device_feature": {
                " 22": "VIRTIO_NET_F_MQ",
                " 12": "VIRTIO_NET_F_HOST_TSO6",
                " 11": "VIRTIO_NET_F_HOST_TSO4",
                " 40": "VIRTIO_F_RING_RESET",
                " 32": "VIRTIO_F_VERSION_1",
                " 33": "VIRTIO_F_IOMMU_PLATFORM",
                " 0": "VIRTIO_NET_F_CSUM",
                "value": "0x10300401801"
            }
        },
add uuid?

@yajwu (Collaborator) commented Dec 18, 2023

Passed the basic function test; looks good to me.

@Ch3n60x (Collaborator, Author) commented Dec 19, 2023

> For VF query: python sw/dpdk/app/vfe-vdpa/vhostmgmt vf -l 0000:af:00.2 […]
>
> add uuid?

Done

This commit adds a new devarg named vm_uuid to the virtio vdpa driver.
This UUID is a unique ID of the VM, most likely the UUID of the libvirt VM.

To specify a UUID, the user can pass '-a 0000:af:00.1 -u $UUID' to let the
driver know the VM UUID of a virtio VF. This parameter is used by the DMA
mapping optimization in the next commit.

Signed-off-by: Chenbo Xia <chenbox@nvidia.com>
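
As a rough illustration of how such a devarg could be consumed, here is a hedged sketch using DPDK's rte_kvargs API. The key name vm_uuid matches the commit message, but the helper names and the fixed-size UUID buffer are assumptions; rte_kvargs_parse(), rte_kvargs_process(), and rte_kvargs_free() are real DPDK functions.

```c
/* Sketch: parsing a vm_uuid devarg with DPDK's rte_kvargs API.
 * The key name and UUID buffer size are illustrative assumptions;
 * rte_kvargs_parse()/rte_kvargs_process()/rte_kvargs_free() are real
 * DPDK functions (rte_kvargs.h).
 */
#include <stdio.h>
#include <rte_common.h>
#include <rte_devargs.h>
#include <rte_kvargs.h>

#define VDPA_ARG_VM_UUID "vm_uuid"	/* assumed devarg key */

static int
parse_vm_uuid(const char *key __rte_unused, const char *value, void *arg)
{
	/* Copy the UUID string; real code would validate its format. */
	snprintf(arg, 37, "%s", value);
	return 0;
}

/* Hypothetical helper: extract vm_uuid from a device's devargs into
 * uuid[37]; returns 0 on success, negative on error or absence. */
static int
get_vm_uuid(const struct rte_devargs *devargs, char uuid[37])
{
	static const char * const valid_keys[] = { VDPA_ARG_VM_UUID, NULL };
	struct rte_kvargs *kvlist;
	int ret;

	if (devargs == NULL)
		return -1;

	kvlist = rte_kvargs_parse(devargs->args, valid_keys);
	if (kvlist == NULL)
		return -1;

	ret = rte_kvargs_process(kvlist, VDPA_ARG_VM_UUID,
				 parse_vm_uuid, uuid);
	rte_kvargs_free(kvlist);
	return ret;
}
```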
Before this commit, every virtio VF had its own VFIO container and
managed its own DMA mapping. When multiple VFs belong to one VM, the
same DMA mapping is therefore set up multiple times, which is
time-consuming.

This commit adds the concept of an IOMMU domain to the virtio vdpa
driver: one VM owns one IOMMU domain, and all VFs belonging to that VM
share it. With this concept, when multiple VFs belong to one VM, the
DMA mapping is set up only once instead of once per VF, which saves
considerable time.

Signed-off-by: Chenbo Xia <chenbox@nvidia.com>
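
A minimal sketch of the sharing idea, under stated assumptions: a small table keyed by vm_uuid hands out one shared VFIO container per VM, so guest memory is DMA-mapped once per VM rather than once per VF. The domain struct, table, and refcounting are illustrative, not the driver's actual data structures; rte_vfio_container_create() and rte_vfio_container_dma_map() are real DPDK APIs from rte_vfio.h.

```c
/* Sketch: one VFIO container per VM, shared by all of its VFs.
 * The domain table and refcounting are illustrative assumptions;
 * rte_vfio_container_create()/rte_vfio_container_dma_map() are real
 * DPDK functions (rte_vfio.h).
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <rte_vfio.h>

#define MAX_DOMAINS 64

struct iommu_domain {
	char vm_uuid[37];	/* key: which VM this domain serves */
	int container_fd;	/* VFIO container shared by the VM's VFs */
	int refcnt;		/* number of attached VFs */
	bool mapped;		/* set once the memory table is programmed */
};

static struct iommu_domain domains[MAX_DOMAINS];

/* Return the VM's domain, creating a VFIO container on first use. */
static struct iommu_domain *
domain_get(const char *vm_uuid)
{
	struct iommu_domain *free_slot = NULL;

	for (int i = 0; i < MAX_DOMAINS; i++) {
		if (domains[i].refcnt > 0 &&
		    strcmp(domains[i].vm_uuid, vm_uuid) == 0) {
			domains[i].refcnt++;	/* reuse existing domain */
			return &domains[i];
		}
		if (domains[i].refcnt == 0 && free_slot == NULL)
			free_slot = &domains[i];
	}
	if (free_slot == NULL)
		return NULL;

	free_slot->container_fd = rte_vfio_container_create();
	if (free_slot->container_fd < 0)
		return NULL;
	snprintf(free_slot->vm_uuid, sizeof(free_slot->vm_uuid),
		 "%s", vm_uuid);
	free_slot->refcnt = 1;
	free_slot->mapped = false;
	return free_slot;
}

/* Map one guest region; a no-op if another VF of the same VM already
 * programmed the table (caller sets dom->mapped after the last region). */
static int
domain_dma_map(struct iommu_domain *dom, uint64_t vaddr,
	       uint64_t iova, uint64_t len)
{
	if (dom->mapped)
		return 0;
	return rte_vfio_container_dma_map(dom->container_fd,
					  vaddr, iova, len);
}
```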