Testing and Integrating new qemu_thread_set_affinity definition #4212

Merged: 1 commit into lf-edge:master on Nov 6, 2024

Conversation

@roja-zededa (Contributor) commented Sep 4, 2024:

Xentools 4.19.0, which uses QEMU 8.0.4, has a new implementation for CPU pinning. A big difference between our implementation and the QEMU vanilla version is that ours can pin vCPU-to-pCPU 1-to-1, while the QEMU implementation only allows a set of CPUs to be assigned to an entire VM (the VM's vCPU threads can still migrate among the given set of pCPUs). We need to verify whether the new qemu_thread_set_affinity definition is suitable for our use case and integrate it if so.
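
To illustrate the difference from the host side, using taskset syntax (the thread and process IDs below are hypothetical placeholders, not output from this PR):

# Our custom 1-to-1 pinning: each vCPU thread is bound to exactly one pCPU
taskset -pc 2 <vcpu0-tid>
taskset -pc 3 <vcpu1-tid>
# Vanilla QEMU set-based affinity: all VM threads may run on any pCPU of the set
taskset -pc 2,3 <qemu-pid>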

@roja-zededa changed the title from "Testing and Integrating new qemu_thread_set_affinity definition from xentools-4.19.0 QEMU 8.0.4" to "Testing and Integrating new qemu_thread_set_affinity definition" on Sep 4, 2024
@roja-zededa marked this pull request as draft on September 4, 2024 18:04
@OhmSpectator (Member) commented Sep 4, 2024:

The most critical use case for the new CPU pinning is realtime, as this implementation does not really pin CPUs. I would switch to it if it does not worsen the results of the RT tests. @rouming can provide more information on how he performed the testing.

Also, if we use the native QEMU implementation, the QEMU parameters should be passed in the native way as well. You can find this in the PR I shared with you earlier: rucoder#1

@roja-zededa (Contributor, Author):

> The most critical use case for the new CPU pinning is realtime, as this implementation does not really pin CPUs. I would switch to it if it does not worsen the results of the RT tests. @rouming can provide more information on how he performed the testing.
>
> Also, if we use the native QEMU implementation, the QEMU parameters should be passed in the native way as well. You can find this in the PR I shared with you earlier: rucoder#1

@rouming Could you please provide more info on the RT CPU pinning tests?

@rouming (Contributor) commented Sep 16, 2024:

> > The most critical use case for the new CPU pinning is realtime, as this implementation does not really pin CPUs. I would switch to it if it does not worsen the results of the RT tests. @rouming can provide more information on how he performed the testing.
> > Also, if we use the native QEMU implementation, the QEMU parameters should be passed in the native way as well. You can find this in the PR I shared with you earlier: rucoder#1
>
> @rouming Could you please provide more info on the RT CPU pinning tests?

The RT suite is here: https://git.kernel.org/pub/scm/utils/rt-tests/rt-tests.git/ ; the de facto standard is cyclictest. You can look up how to use it if you wish, but I can tell you in advance that pinning the whole process with all its threads to a set of CPUs, compared to a 1-to-1 mapping, has no visible impact on any test results, simply because the scheduler is smart enough and won't migrate a task to another CPU if the load on all vCPUs is equal (which is usually the case). Even if a task migrates, that does not happen frequently, so the impact is negligible.
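
For reference, a typical cyclictest run looks like this (the parameters here are illustrative, not the exact ones used for the EVE RT testing):

# mlock memory, SMP mode, RT priority 95, 200 us interval, run 10 minutes, print only the summary
cyclictest -m -S -p95 -i 200 -D 10m -q

The max latency reported by such runs is the figure to compare between the two pinning implementations.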

@OhmSpectator (Member):

It would be nice to remove the custom patches in this case.

But I don't see how the QEMU vanilla pinning is used here. To run with the native CPU pinning, QEMU should start with the -object thread-context,id=tc1,cpu-affinity=0-1,cpu-affinity=6-7 option or similar...
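
For reference, a minimal sketch of such an invocation, assuming the thread context is wired to a preallocated memory backend as described in the QEMU 8.x documentation (the backend and NUMA node names are illustrative):

qemu-system-x86_64 \
    -object thread-context,id=tc1,cpu-affinity=0-1,cpu-affinity=6-7 \
    -object memory-backend-ram,id=ram0,size=4G,prealloc=on,prealloc-context=tc1 \
    -numa node,nodeid=0,memdev=ram0 \
    ...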

@rouming (Contributor) commented Sep 17, 2024:

@roja-zededa I see this PR contains the commit message from another PR, "Upgraded Xen-tools from 4.15 to 4.19.0". I'm not sure what the purpose of duplicating it is, if this is not a mistake.

@roja-zededa (Contributor, Author) commented Sep 17, 2024:

> @roja-zededa I see this PR contains the commit message from another PR, "Upgraded Xen-tools from 4.15 to 4.19.0". I'm not sure what the purpose of duplicating it is, if this is not a mistake.

@rouming No, it wasn't a duplicate. Upgrading the Xen-tools version to 4.19.0 introduced the new vanilla QEMU pinning feature from QEMU 8.0; here we're testing whether we should keep the vanilla pinning feature or go with Nikolay's 1-to-1 custom pinning patch, which was already included in the previous Xen-tools versions.

@rouming (Contributor) commented Sep 18, 2024:

> > @roja-zededa I see this PR contains the commit message from another PR, "Upgraded Xen-tools from 4.15 to 4.19.0". I'm not sure what the purpose of duplicating it is, if this is not a mistake.
>
> @rouming No, it wasn't a duplicate. Upgrading the Xen-tools version to 4.19.0 introduced the new vanilla QEMU pinning feature from QEMU 8.0; here we're testing whether we should keep the vanilla pinning feature or go with Nikolay's 1-to-1 custom pinning patch, which was already included in the previous Xen-tools versions.

Now I see you pushed a commit which corresponds to the PR description, nice.

Could you please share the results of the testing and explain how exactly you tested the two different implementations?

@roja-zededa force-pushed the new-cpupinning branch 2 times, most recently from 25221aa to 4c26806, on September 25, 2024 17:20
@roja-zededa closed this on Oct 1, 2024
@roja-zededa reopened this on Oct 1, 2024
@roja-zededa force-pushed the new-cpupinning branch 2 times, most recently from 1d5f3ad to 91de957, on October 1, 2024 20:27
@OhmSpectator (Member):

I would just cherry-pick the original commit and keep the commit message.

git remote add ohmspectator git@github.com:OhmSpectator/eve.git
git fetch ohmspectator
git cherry-pick 32ec62b016016792d2c07512428f2e50559e7c34

@OhmSpectator (Member):

@roja-zededa, any progress on manual testing?

@roja-zededa (Contributor, Author):

The qemu_thread_set_affinity(cpu->thread, NULL, cpu->cpumask); definition has been changed in the newer version. As of now, we pass NULL for the CPU list, but that might cause some issues. We need to change patch 15 to accommodate the new definition.

@OhmSpectator (Member):

> The qemu_thread_set_affinity(cpu->thread, NULL, cpu->cpumask); definition has been changed in the newer version. As of now, we pass NULL for the CPU list, but that might cause some issues. We need to change patch 15 to accommodate the new definition.

I would make it a new patch without changing patch 15.

@roja-zededa marked this pull request as draft on October 16, 2024 17:29
@roja-zededa (Contributor, Author):

@OhmSpectator The board I have right now doesn't have 10 cores. Would it be okay to manually perform the test on a four-core Dell OptiPlex? If that doesn't work, we can directly kick off the new CPU pinning test designed by Vignesh.

@OhmSpectator (Member):

You can also run EVE in QEMU with a given number of vCPUs =)
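
For example, something along these lines (illustrative only; the image name is a placeholder, and the EVE Makefile provides its own run targets):

qemu-system-x86_64 -enable-kvm -smp 10 -m 8192 \
    -drive file=eve-live.qcow2,format=qcow2 \
    ...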

@roja-zededa force-pushed the new-cpupinning branch 4 times, most recently from 227a75a to 0a66a33, on October 22, 2024 19:37
Update pillar's handling of QEMU guests to take advantage of the native CPU pinning
options introduced in the newer version of QEMU (8.0.4). This eliminates the need
for our custom patches to QEMU for CPU pinning. The following changes are included:

1. Removing the custom QEMU CPU-pinning patches in favor of the native QEMU pinning feature
2. Refactoring CPU pinning to use integer slices instead of comma-separated strings
3. Updating the QEMU command-line arguments to include the native CPU pinning options
4. Ensuring compatibility with both the KVM and Xen hypervisors

This change allows us to leverage upstream QEMU improvements and simplifies
the codebase by removing custom patches and complex string manipulations. Tested with QEMU version 8.0.4.

Signed-off-by: Roja Eswaran <roja@zededa.com>
@roja-zededa (Contributor, Author):

@OhmSpectator Tested this PR with a tiny VM (1 vCPU). Here are the results. I added one VM with no pinning, then three more VMs with pinning enabled.
79e8a31a-712c-4d1a-be9d-1f41a6da1300:~# for pid in $(pgrep qemu); do
    echo "======= QEMU $pid threads: ======="
    for spid in $(ps -T -o spid= -p "$pid"); do taskset -pc "$spid"; done
done
======= QEMU 32076 threads: =======
pid 32076's current affinity list: 0-3
pid 32094's current affinity list: 0-3
pid 32098's current affinity list: 0-3
pid 32099's current affinity list: 0-3
pid 32101's current affinity list: 0-3
pid 32142's current affinity list: 0-3
pid 32211's current affinity list: 0-3
pid 32444's current affinity list: 0-3
pid 32445's current affinity list: 0-3
pid 32446's current affinity list: 0-3
pid 32447's current affinity list: 0-3
pid 32448's current affinity list: 0-3
pid 32449's current affinity list: 0-3


79e8a31a-712c-4d1a-be9d-1f41a6da1300:~# for pid in $(pgrep qemu); do
    echo "======= QEMU $pid threads: ======="
    for spid in $(ps -T -o spid= -p "$pid"); do taskset -pc "$spid"; done
done
======= QEMU 578 threads: =======
pid 578's current affinity list: 1
pid 591's current affinity list: 1
pid 592's current affinity list: 1
pid 596's current affinity list: 1
pid 597's current affinity list: 1
pid 599's current affinity list: 1
======= QEMU 32076 threads: =======
pid 32076's current affinity list: 0,2,3
pid 32094's current affinity list: 0,2,3
pid 32098's current affinity list: 0,2,3
pid 32099's current affinity list: 0,2,3
pid 344's current affinity list: 0,2,3


79e8a31a-712c-4d1a-be9d-1f41a6da1300:~# for pid in $(pgrep qemu); do
    echo "======= QEMU $pid threads: ======="
    for spid in $(ps -T -o spid= -p "$pid"); do taskset -pc "$spid"; done
done
======= QEMU 578 threads: =======
pid 578's current affinity list: 1
pid 591's current affinity list: 1
pid 592's current affinity list: 1
pid 596's current affinity list: 1
pid 597's current affinity list: 1
pid 979's current affinity list: 0-3
pid 1954's current affinity list: 1
======= QEMU 1669 threads: =======
pid 1669's current affinity list: 2
pid 1682's current affinity list: 2
pid 1683's current affinity list: 2
pid 1688's current affinity list: 2
pid 1689's current affinity list: 2
pid 1702's current affinity list: 2
pid 1813's current affinity list: 0-3
pid 1814's current affinity list: 2
pid 1815's current affinity list: 2
pid 1816's current affinity list: 2
pid 1817's current affinity list: 2
======= QEMU 32076 threads: =======
pid 32076's current affinity list: 0,3
pid 32094's current affinity list: 0,3
pid 32098's current affinity list: 0,3
pid 32099's current affinity list: 0,3
pid 1163's current affinity list: 0,3


======= QEMU 578 threads: =======
pid 578's current affinity list: 1
pid 591's current affinity list: 1
pid 592's current affinity list: 1
pid 596's current affinity list: 1
pid 597's current affinity list: 1
pid 979's current affinity list: 0-3
pid 3507's current affinity list: 1
======= QEMU 1669 threads: =======
pid 1669's current affinity list: 2
pid 1682's current affinity list: 2
pid 1683's current affinity list: 2
pid 1688's current affinity list: 2
pid 1689's current affinity list: 2
pid 2258's current affinity list: 0-3
pid 3506's current affinity list: 2
======= QEMU 3163 threads: =======
pid 3163's current affinity list: 3
pid 3178's current affinity list: 3
pid 3185's current affinity list: 3
pid 3193's current affinity list: 3
pid 3194's current affinity list: 3
pid 3201's current affinity list: 3
pid 3251's current affinity list: 0-3
pid 3503's current affinity list: 0-3
pid 3504's current affinity list: 0-3
pid 3505's current affinity list: 0-3
======= QEMU 32076 threads: =======
pid 32076's current affinity list: 0
pid 32094's current affinity list: 0
pid 32098's current affinity list: 0
pid 32099's current affinity list: 0
pid 2881's current affinity list: 0


Removing the pinned VMs one by one:


======= QEMU 1669 threads: =======
pid 1669's current affinity list: 2
pid 1682's current affinity list: 2
pid 1683's current affinity list: 2
pid 1688's current affinity list: 2
pid 1689's current affinity list: 2
pid 2258's current affinity list: 0-3
pid 4727's current affinity list: 2
pid 4728's current affinity list: 2
pid 4729's current affinity list: 2
pid 4762's current affinity list: 0-3
======= QEMU 3163 threads: =======
pid 3163's current affinity list: 3
pid 3178's current affinity list: 3
pid 3185's current affinity list: 3
pid 3193's current affinity list: 3
pid 3194's current affinity list: 3
pid 3593's current affinity list: 0-3
======= QEMU 32076 threads: =======
pid 32076's current affinity list: 0,1
pid 32094's current affinity list: 0,1
pid 32098's current affinity list: 0,1
pid 32099's current affinity list: 0,1
pid 2881's current affinity list: 0,1


======= QEMU 1669 threads: =======
pid 1669's current affinity list: 2
pid 1682's current affinity list: 2
pid 1683's current affinity list: 2
pid 1688's current affinity list: 2
pid 1689's current affinity list: 2
pid 2258's current affinity list: 0-3
======= QEMU 32076 threads: =======
pid 32076's current affinity list: 0,1
pid 32094's current affinity list: 0,1
pid 32098's current affinity list: 0,1
pid 32099's current affinity list: 0,1
pid 2881's current affinity list: 0,1


79e8a31a-712c-4d1a-be9d-1f41a6da1300:~# for pid in $(pgrep qemu); do
    echo "======= QEMU $pid threads: ======="
    for spid in $(ps -T -o spid= -p "$pid"); do taskset -pc "$spid"; done
done
======= QEMU 32076 threads: =======
pid 32076's current affinity list: 0-3
pid 32094's current affinity list: 0-3
pid 32098's current affinity list: 0-3
pid 32099's current affinity list: 0-3
pid 5918's current affinity list: 0-3
pid 6455's current affinity list: 0-3

@roja-zededa marked this pull request as ready for review on October 23, 2024 20:16
@rene (Contributor) commented Oct 25, 2024:

@roja-zededa, in the commit title "pillar: Adopt pillar to use native cpupinning", I guess you meant: "pillar: Adapt pillar to use..."

@@ -242,7 +242,7 @@ type VmConfig struct {
 	ExtraArgs  string // added to bootargs
 	BootLoader string // default ""
 	// For CPU pinning
-	CPUs string // default "", list of "1,2"
+	CPUs []int // default nil, list of [1,2]
@rene (Contributor):

@roja-zededa, this change implies changing the API as well: https://github.com/lf-edge/eve-api/blob/main/proto/config/vm.proto#L48

I'm not sure how this impacts the controller... I wouldn't change the type; you can still keep it as a string and parse it accordingly...

(Member):

+1, changes to the API would require changes to the controller.

@roja-zededa (Contributor, Author):

Nikolay authored this commit. @OhmSpectator, could you please take a look at it?

(Contributor):

cmd/zedagent/parseconfig.go parses the received protobuf and produces Go structs such as this one. Thus it can be made to parse the existing protobuf string format and produce an array of integers, so there is no need to change the EVE API.

(Member):

The field in the protobuf that @rene pointed to is not filled in by the controller at the moment, so we don't even have to change the parsing logic. It's currently used only internally in the DomainStatus.

@uncleDecart (Member) left a comment:

I would also add a test to domainmgr checking that the CPUs are set as expected in VmConfig.CPUs (something like this) and that the generated hypervisor configs are as expected.

@OhmSpectator (Member):

> ======= QEMU 1669 threads: =======
> pid 1669's current affinity list: 2
> pid 1682's current affinity list: 2
> pid 1683's current affinity list: 2
> pid 1688's current affinity list: 2
> pid 1689's current affinity list: 2
> pid 2258's current affinity list: 0-3
> ======= QEMU 32076 threads: =======
> pid 32076's current affinity list: 0,1
> pid 32094's current affinity list: 0,1
> pid 32098's current affinity list: 0,1
> pid 32099's current affinity list: 0,1
> pid 2881's current affinity list: 0,1

This looks strange, as I would expect:

> ======= QEMU 1669 threads: =======
> pid 1669's current affinity list: 2
> pid 1682's current affinity list: 2
> pid 1683's current affinity list: 2
> pid 1688's current affinity list: 2
> pid 1689's current affinity list: 2
> pid 2258's current affinity list: 0-3
> ======= QEMU 32076 threads: =======
> pid 32076's current affinity list: 0,1,3
> pid 32094's current affinity list: 0,1,3
> pid 32098's current affinity list: 0,1,3
> pid 32099's current affinity list: 0,1,3
> pid 2881's current affinity list: 0,1,3

But maybe the CPUs were redistributed a little bit later.

@OhmSpectator (Member) left a comment:

@roja-zededa, thanks for the tests. The results look fine. Nevertheless, I would prefer to run the tests as they are described in the doc, as they also involve some corner cases.

Also, could you please run this snippet to check the affinities (should be run from the debug container with the procps package installed):

for pid in $(pgrep qemu); do 
    echo "======= QEMU $pid threads: ======="; 
    for spid in $(ps -T -o spid= -p "$pid"); do 
        echo -n "Thread $spid affinity: "; taskset -pc "$spid"; 
        cgpath=$(grep -m1 cpuset /proc/$spid/cgroup | cut -d: -f3)
        echo "Cgroup cpuset: $(cat /sys/fs/cgroup/cpuset${cgpath}/cpuset.cpus)"
    done; 
done

I want to see how it looks for the non-VCPU threads, as they are now not affected by the CPU pinning:

======= QEMU 1669 threads: =======
pid 1669's current affinity list: 2
pid 1682's current affinity list: 2
pid 1683's current affinity list: 2
pid 1688's current affinity list: 2
pid 1689's current affinity list: 2
pid 2258's current affinity list: 0-3 <---- here
pid 4727's current affinity list: 2
pid 4728's current affinity list: 2
pid 4729's current affinity list: 2
pid 4762's current affinity list: 0-3 <---- here

I also want to understand why most of the threads have an affinity corresponding to the CPU mask while others do not... I expected only the VCPU threads to be affected, but we definitely see more. If you could, it would be nice to check how the new version of QEMU handles this option:

-object thread-context,id=tc1,...

Maybe we should do the same for tc0, or only for tc0?
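
One way to narrow this down is to print the thread names next to the affinities, since QEMU names its threads (for example, vCPU threads typically show up as "CPU 0/KVM" under KVM; the exact names depend on the QEMU version and accelerator, so treat this as an assumption):

for pid in $(pgrep qemu); do
    echo "======= QEMU $pid threads: ======="
    ps -T -p "$pid" -o spid=,comm=
done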

@roja-zededa (Contributor, Author):

> ======= QEMU 1669 threads: =======
> pid 1669's current affinity list: 2
> pid 1682's current affinity list: 2
> pid 1683's current affinity list: 2
> pid 1688's current affinity list: 2
> pid 1689's current affinity list: 2
> pid 2258's current affinity list: 0-3
> ======= QEMU 32076 threads: =======
> pid 32076's current affinity list: 0,1
> pid 32094's current affinity list: 0,1
> pid 32098's current affinity list: 0,1
> pid 32099's current affinity list: 0,1
> pid 2881's current affinity list: 0,1
>
> This looks strange, as I would expect:
>
> > ======= QEMU 1669 threads: =======
> > pid 1669's current affinity list: 2
> > pid 1682's current affinity list: 2
> > pid 1683's current affinity list: 2
> > pid 1688's current affinity list: 2
> > pid 1689's current affinity list: 2
> > pid 2258's current affinity list: 0-3
> > ======= QEMU 32076 threads: =======
> > pid 32076's current affinity list: 0,1,3
> > pid 32094's current affinity list: 0,1,3
> > pid 32098's current affinity list: 0,1,3
> > pid 32099's current affinity list: 0,1,3
> > pid 2881's current affinity list: 0,1,3
>
> But maybe the CPUs were redistributed a little bit later.

The deleted CPUs are being reclaimed. Here is the trace:
QEMU 22745 - no CPUs pinned
QEMU 23728 - 1 CPU pinned
QEMU 24930 - 1 CPU pinned

======= QEMU 22745 threads: =======
pid 22745's current affinity list: 0,3
pid 22774's current affinity list: 0,3
pid 22778's current affinity list: 0,3
pid 22779's current affinity list: 0,3
pid 24456's current affinity list: 0,3
pid 25204's current affinity list: 0,3
pid 25216's current affinity list: 0-3
======= QEMU 23728 threads: =======
pid 23728's current affinity list: 1
pid 23742's current affinity list: 1
pid 23743's current affinity list: 1
pid 23747's current affinity list: 1
pid 23748's current affinity list: 1
pid 24056's current affinity list: 0-3
======= QEMU 24930 threads: =======
pid 24930's current affinity list: 2
pid 24944's current affinity list: 2
pid 24945's current affinity list: 2
pid 24949's current affinity list: 2
pid 24950's current affinity list: 2
pid 24952's current affinity list: 2
pid 25024's current affinity list: 0-3
pid 25025's current affinity list: 2
pid 25026's current affinity list: 2
pid 25027's current affinity list: 2

Removal:

Deleted a VM

======= QEMU 22745 threads: =======
pid 22745's current affinity list: 0,1,3
pid 22774's current affinity list: 0,1,3
pid 22778's current affinity list: 0,1,3
pid 22779's current affinity list: 0,1,3
pid 27625's current affinity list: 0-3
======= QEMU 24930 threads: =======
pid 24930's current affinity list: 2
pid 24944's current affinity list: 2
pid 24945's current affinity list: 2
pid 24949's current affinity list: 2
pid 24950's current affinity list: 2
pid 25420's current affinity list: 0-3
pid 27674's current affinity list: 2

@roja-zededa (Contributor, Author) commented Oct 31, 2024:

@OhmSpectator @rene Covered all the test cases in this document: https://docs.google.com/document/d/1qoCq7SPk6UR2LCb6APVbmLLgo7W7wONJUncTrXCbPeM/edit?tab=t.0#heading=h.p1hte28jbxjx The results look good to me. Please let me know if you have more questions.

@OhmSpectator (Member):

@roja-zededa, please address the rest of my comments here: #4212 (review)

  1. I want to understand which threads run only on a given subset of CPUs and which threads run on all CPUs. To understand this, please check how the thread contexts work. We use tc1, but we should understand exactly which threads are affected by it.

  2. Check not only the CPU affinities but also the CPUset in the cgroup. I have already provided a snippet for that:

for pid in $(pgrep qemu); do 
    echo "======= QEMU $pid threads: ======="; 
    for spid in $(ps -T -o spid= -p "$pid"); do 
        echo -n "Thread $spid affinity: "; taskset -pc "$spid"; 
        cgpath=$(grep -m1 cpuset /proc/$spid/cgroup | cut -d: -f3)
        echo "Cgroup cpuset: $(cat /sys/fs/cgroup/cpuset${cgpath}/cpuset.cpus)"
    done; 
done

@roja-zededa (Contributor, Author):

> @roja-zededa, please address the rest of my comments here: #4212 (review)
>
> 1. I want to understand which threads run only on a given subset of CPUs and which threads run on all CPUs. To understand this, please check how the thread contexts work. We use tc1, but we should understand exactly which threads are affected by it.
> 2. Check not only the CPU affinities but also the CPUset in the cgroup. I have already provided a snippet for that:
>
> for pid in $(pgrep qemu); do
>     echo "======= QEMU $pid threads: ======="
>     for spid in $(ps -T -o spid= -p "$pid"); do
>         echo -n "Thread $spid affinity: "; taskset -pc "$spid"
>         cgpath=$(grep -m1 cpuset /proc/$spid/cgroup | cut -d: -f3)
>         echo "Cgroup cpuset: $(cat /sys/fs/cgroup/cpuset${cgpath}/cpuset.cpus)"
>     done
> done

@OhmSpectator Updated this doc: https://docs.google.com/document/d/1qoCq7SPk6UR2LCb6APVbmLLgo7W7wONJUncTrXCbPeM/edit?tab=t.0#heading=h.p1hte28jbxjx with the cgroup CPUset info and the CPU affinities. Please refer to it. I don't see any issues with it so far.

@roja-zededa (Contributor, Author):


@OhmSpectator @rene @eriknordmark Please kick off the tests.

@eriknordmark (Contributor) left a comment:

Re-run tests

@OhmSpectator (Member):

@roja-zededa, thanks for the testing! Interestingly, we now enforce pinning more by setting the CPUset in a cgroup than by setting the affinity in QEMU. But let it be.

@OhmSpectator merged commit d463c8e into lf-edge:master on Nov 6, 2024
80 of 92 checks passed