Testing and Integrating new qemu_thread_set_affinity definition #4212
Conversation
The most critical use case for the new CPU pinning is realtime, as this implementation does not really pin CPUs 1-to-1. I would switch to it if it does not worsen the results of the RT tests. @rouming can provide more information on how he performed the testing. Also, if we use the native QEMU implementation, the QEMU parameters should be passed in the native way as well. You can find it in the PR I shared with you earlier: rucoder#1
@rouming Could you please provide more info on the RT CPU pinning tests?
The RT suite is here: https://git.kernel.org/pub/scm/utils/rt-tests/rt-tests.git/ . The de facto standard is cyclictest; you can look up how to use it if you wish, but I can tell you in advance that pinning the whole process with all its threads to a set of CPUs, compared to a 1-to-1 mapping, has no visible impact on any test results, simply because the scheduler is smart enough and won't migrate a task to another CPU if the load on all vCPUs is equal (which is usually the case). Even if a task migrates, that does not happen frequently, so the impact is negligible.
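For illustration, a typical cyclictest run for such a comparison might look like this (the parameters are illustrative, not the exact command used in the testing mentioned above):

cyclictest -m -p 95 -t 4 -a 2-5 -D 10m

That is: lock memory (-m), run 4 measurement threads (-t 4) at RT priority 95 (-p 95), with affinity restricted to CPUs 2-5 (-a 2-5), for 10 minutes (-D 10m). Run once against a guest pinned 1-to-1 and once against set-based pinning, then compare the reported max latencies.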
It would be nice to remove the custom patches in this case. But I don't see how the QEMU vanilla pinning is used here. To run with the native CPU pinning, QEMU should start with
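A hedged sketch of what such a native-pinning invocation could look like, assuming QEMU's thread-context object (introduced around QEMU 7.2) is the mechanism meant here; the property names follow the QEMU documentation, while the IDs, memory size, and CPU numbers are purely illustrative:

qemu-system-x86_64 ... \
  -object thread-context,id=tc0,cpu-affinity=2-5 \
  -object memory-backend-ram,id=mem0,size=4G,prealloc=on,prealloc-context=tc0 \
  ...

The prealloc-context property ties the backend's preallocation threads to tc0, so they inherit its CPU affinity; how far this affects vCPU threads in EVE's Xen/KVM builds is exactly what this PR is testing.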
@roja-zededa I see this PR contains the commit message from another PR, "Upgraded Xen-tools from 4.15 to 4.19.0"; I'm not sure what the purpose of duplicating it is, if this is not a mistake.
Force-pushed cd04e97 to 4732410
@rouming No, it wasn't a duplicate. Upgrading the xen-tools version to 4.19.0 introduced the new vanilla QEMU pinning feature from QEMU 8.0; we're testing here whether we should keep the vanilla pinning feature or go with Nikolay's 1-to-1 custom pinning patch, which is already included in the previous xen-tools versions.
Now I see you pushed a commit that corresponds to the PR description, nice. Can you please share the results of the testing and how exactly you tested the two different implementations?
Force-pushed 25221aa to 4c26806
Force-pushed 4c26806 to 7dd83ea
Force-pushed 1d5f3ad to 91de957
I would just cherry-pick the original commit and preserve the commit message.
Force-pushed 60a59ac to 407735c
@roja-zededa, any progress on the manual testing?
I would make it a new patch without changing patch 15.
@OhmSpectator The board I have right now doesn't have 10 cores. Would it be okay to manually perform the test on a four-core Dell OptiPlex? If that doesn't work, we can directly kick off the new CPU-pinning test designed by Vignesh.
You can also run EVE in QEMU with a given number of vCPUs =)
Force-pushed 227a75a to 0a66a33
Update pillar's handling of QEMU guests to take advantage of the native CPU pinning options introduced in the newer version of QEMU (8.0.4). This eliminates the need for our custom patches to QEMU for CPU pinning. The following changes are included:
(1) Removing the custom QEMU CPU-pinning patches in favor of the native QEMU pinning feature
(2) Refactoring CPU pinning to use integer slices instead of comma-separated strings
(3) Updating the QEMU command-line arguments to include the native CPU pinning options
(4) Ensuring compatibility with both KVM and Xen hypervisors
This change allows us to leverage upstream QEMU improvements and simplifies the codebase by removing custom patches and complex string manipulations. Tested with QEMU version 8.0.4.
Signed-off-by: Roja Eswaran <roja@zededa.com>
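A minimal sketch of what change (3) implies, with hypothetical helper names (not the actual pillar code); it assumes the thread-context object's repeated cpu-affinity keys, following the syntax shown in the QEMU documentation:

package main

import (
	"fmt"
	"strings"
)

// qemuPinningArgs turns the new []int CPU list into QEMU's native
// pinning arguments via a thread-context object. It returns nil when
// no pinning is requested.
func qemuPinningArgs(cpus []int) []string {
	if len(cpus) == 0 {
		return nil
	}
	props := []string{"thread-context,id=tc0"}
	for _, c := range cpus {
		props = append(props, fmt.Sprintf("cpu-affinity=%d", c))
	}
	return []string{"-object", strings.Join(props, ",")}
}

func main() {
	// Prints: [-object thread-context,id=tc0,cpu-affinity=1,cpu-affinity=2]
	fmt.Println(qemuPinningArgs([]int{1, 2}))
}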
Force-pushed 0a66a33 to 90db3a3
@OhmSpectator Tested this PR with a tiny VM (1 vCPU). Here are the results: added one VM with no pinning, then 3 more VMs with pinning enabled.
79e8a31a-712c-4d1a-be9d-1f41a6da1300:~# for pid in $(pgrep qemu); do echo "======= QEMU $pid threads: ======="; ... ; done
======= QEMU 578 threads: =======
Removing Pinned VMs one-by-one
======= QEMU 1669 threads: =======
======= QEMU 1669 threads: =======
(the per-thread affinity lines did not survive the capture)
@roja-zededa, in the commit title "pillar: Adopt pillar to use native cpupinning", I guess you meant: "pillar: Adapt pillar to use..."
@@ -242,7 +242,7 @@ type VmConfig struct {
 	ExtraArgs  string // added to bootargs
 	BootLoader string // default ""
 	// For CPU pinning
-	CPUs string // default "", list of "1,2"
+	CPUs []int // default nil, list of [1,2]
@roja-zededa, this change implies changing the API as well: https://github.com/lf-edge/eve-api/blob/main/proto/config/vm.proto#L48
I'm not sure how this impacts the controller... I wouldn't change the type; you can keep it as a string and parse it accordingly...
+1, changes to the API would require changes on the controller.
Nikolay authored this commit. @OhmSpectator Could you please take a look at it?
cmd/zedagent/parseconfig.go parses the received protobuf and produces Go structs such as this one, so it can be made to parse the existing protobuf string format and produce an array of integers.
Thus there is no need to change the EVE API.
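For illustration, a minimal sketch of such a parser (parseCPUList is a hypothetical name; the real code would live in cmd/zedagent/parseconfig.go):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseCPUList converts the protobuf's comma-separated CPU string
// (e.g. "1,2") into the []int now used internally.
func parseCPUList(s string) ([]int, error) {
	if s == "" {
		return nil, nil // no pinning configured
	}
	var cpus []int
	for _, field := range strings.Split(s, ",") {
		n, err := strconv.Atoi(strings.TrimSpace(field))
		if err != nil {
			return nil, fmt.Errorf("invalid CPU id %q: %w", field, err)
		}
		cpus = append(cpus, n)
	}
	return cpus, nil
}

func main() {
	cpus, err := parseCPUList("1,2")
	fmt.Println(cpus, err) // [1 2] <nil>
}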
This field in the protobuf that @rene pointed to is not filled out by the controller at the moment. So we don't even have to change the parsing logic. It's currently used only internally in the DomainStatus.
I would also add a test to domainmgr checking that the CPUs are set as expected in VmConfig.CPUs (something like the sketch below) and that the generated hypervisor configs are as expected.
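A sketch only: it assumes the hypothetical parseCPUList helper from the earlier comment, and a real domainmgr test would build a full domain config instead:

package domainmgr_test

import (
	"reflect"
	"testing"
)

func TestParseCPUList(t *testing.T) {
	// parseCPUList is the hypothetical parser sketched above.
	got, err := parseCPUList("1,2")
	if err != nil {
		t.Fatalf("parseCPUList: %v", err)
	}
	if want := []int{1, 2}; !reflect.DeepEqual(got, want) {
		t.Errorf("VmConfig.CPUs = %v, want %v", got, want)
	}
}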
This looks strange, as I would expect
But maybe the CPUs were redistributed a little bit later.
@roja-zededa, thanks for the tests. The results look fine. Nevertheless, I would prefer to run the tests as they are described in the doc, as they also cover some corner cases.
Also, could you please run this snippet to check the affinities (it should be run from the debug container with the procps package installed):
for pid in $(pgrep qemu); do
echo "======= QEMU $pid threads: =======";
for spid in $(ps -T -o spid= -p "$pid"); do
echo -n "Thread $spid affinity: "; taskset -pc "$spid";
cgpath=$(grep -m1 cpuset /proc/$spid/cgroup | cut -d: -f3)
echo "Cgroup cpuset: $(cat /sys/fs/cgroup/cpuset${cgpath}/cpuset.cpus)"
done;
done
I want to see how it looks for the non-VCPU threads, as they are not affected by the CPU pinning now:
======= QEMU 1669 threads: =======
pid 1669's current affinity list: 2
pid 1682's current affinity list: 2
pid 1683's current affinity list: 2
pid 1688's current affinity list: 2
pid 1689's current affinity list: 2
pid 2258's current affinity list: 0-3 <---- here
pid 4727's current affinity list: 2
pid 4728's current affinity list: 2
pid 4729's current affinity list: 2
pid 4762's current affinity list: 0-3 <---- here
I also want to understand why most of the threads have an affinity corresponding to the CPU mask while others don't... I expected only the VCPU threads to be affected, but we definitely see more. If you could, it would be nice to check how the new version of QEMU handles this option:
-object thread-context,id=tc1,...
Maybe we should do the same for tc0, or only for tc0?
The deleted CPUs are being reclaimed. Here is the trace:
======= QEMU 22745 threads: =======
Removal: Delete a VM
======= QEMU 22745 threads: =======
@OhmSpectator @rene Covered all the test cases in this document; the results look good to me: https://docs.google.com/document/d/1qoCq7SPk6UR2LCb6APVbmLLgo7W7wONJUncTrXCbPeM/edit?tab=t.0#heading=h.p1hte28jbxjx Please let me know if you have more questions.
@roja-zededa, please address the rest of my comments here:
@OhmSpectator Updated this doc with the cpuset cgroup info and CPU affinity: https://docs.google.com/document/d/1qoCq7SPk6UR2LCb6APVbmLLgo7W7wONJUncTrXCbPeM/edit?tab=t.0#heading=h.p1hte28jbxjx Please refer to it. I don't see any issue with it so far.
@OhmSpectator @rene @eriknordmark Please kick off the tests.
Re-run tests
@roja-zededa, thanks for the testing! Interestingly, we now enforce pinning more by setting the cpuset in a cgroup than by setting the affinity in QEMU. But let it be.
Xentools 4.19.0, which uses QEMU 8.0.4, has a new implementation for CPU pinning. The big difference between our implementation and the QEMU vanilla version is that ours can pin vCPU-to-pCPU 1-to-1, while the QEMU implementation only allows a set of CPUs to be assigned to an entire VM (and the VM's vCPU threads can still migrate among the given set of pCPUs). We need to verify whether the new qemu_thread_set_affinity definition is suitable for our use case and, if so, integrate it.
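To make the difference concrete, a hedged illustration in terms of taskset (thread and process IDs are placeholders): with the custom patches, each vCPU thread is bound to exactly one pCPU,

taskset -cp 2 <vcpu0-tid>
taskset -cp 3 <vcpu1-tid>

whereas the vanilla set-based behavior is closer to constraining the whole QEMU process,

taskset -cp 2-3 <qemu-pid>

leaving the scheduler free to migrate the vCPU threads within that set.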