
rust/kernel: remove config #ifdef in Error.rs file #346

Merged
merged 1 commit into Rust-for-Linux:rust from TheSven73:rust-for-linux-no-ifdefs on Jun 4, 2021

Conversation

TheSven73
Collaborator

The use of #ifdef CONFIG_ statements in .c/.rs files is deprecated:
it makes the code much harder to maintain over time. See:
https://lkml.org/lkml/2021/6/3/564

Use a Rust-C helper instead to leverage automatic CONFIG selection
in C kernel headers.

Signed-off-by: Sven Van Asbroeck thesven73@gmail.com


@ojeda
Member

ojeda commented Jun 4, 2021

Good idea to defer to the C function!

However, note that conditional compilation is not deprecated -- Greg is saying it should be avoided in .c files in particular unless there is a good reason for it, which is guidance from Doc/process/coding-style.rst [*].

Also note that .rs files are both implementation files and headers at the same time, so sometimes there is no way but to use conditional compilation.

[*] Which is actually badly worded in Doc/process/coding-style.rst: conditional compilation in headers is actually worse because it infects everyone including it, creating more compile-time branches. What the document is actually trying to explain is that one should not expect callers to use conditional compilation, and instead give the callers a noop stub. But if your conditional compilation is private to an implementation file, one really should not move it to a header just to follow the rule -- just create the stub privately. (The next paragraphs clarify a bit, but still, the first sentence of that section could really be improved because it is misleading).
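
For illustration, a made-up Rust rendering of that private-stub pattern (CONFIG_FOO and the hook are hypothetical):

// Hypothetical sketch: keep the conditional compilation private to
// the implementation and give callers one unconditional function.
#[cfg(CONFIG_FOO)]
fn foo_debug_hook() {
    // Real work, compiled only when CONFIG_FOO is enabled.
}

#[cfg(not(CONFIG_FOO))]
fn foo_debug_hook() {
    // Noop stub: callers never need a #[cfg] of their own.
}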

@TheSven73
Collaborator Author

Absolutely, conditional compilation is unavoidable in, say, lib.rs. And yes, "unless good reason" definitely applies.

So should I reword the commit message from "deprecated" to "should be avoided"?

PS There's another conditional compilation in mutex.rs that I am hesitant to touch:

#[cfg(not(CONFIG_DEBUG_LOCK_ALLOC))]
fn lock_noguard(&self) {
    // SAFETY: `mutex` points to valid memory.
    unsafe { bindings::mutex_lock(self.mutex.get()) };
}

#[cfg(CONFIG_DEBUG_LOCK_ALLOC)]
fn lock_noguard(&self) {
    // SAFETY: `mutex` points to valid memory.
    unsafe { bindings::mutex_lock_nested(self.mutex.get(), 0) };
}

What's your opinion on that one? People are going to say "performance", but do we have any data to back that up? Premature optimization etc :)
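
(For reference, a cfg-free version could reuse this PR's helper approach. rust_helper_mutex_lock is hypothetical here; the C side would simply call mutex_lock(), which the kernel headers already map to mutex_lock_nested(lock, 0) under CONFIG_DEBUG_LOCK_ALLOC:)

extern "C" {
    #[link_name = "rust_helper_mutex_lock"]
    fn mutex_lock(mutex: *mut bindings::mutex);
}

fn lock_noguard(&self) {
    // SAFETY: `mutex` points to valid memory.
    unsafe { mutex_lock(self.mutex.get()) };
}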

@ojeda
Member

ojeda commented Jun 4, 2021

So should I reword the commit message from "deprecated" to "should be avoided"?

Yeah, please! Or just remove that line -- simplifying conditional compilation is always a good idea even without further arguments.

What's your opinion on that one?

In general, unless we have proof of bad performance, please prefer simplicity.

In particular, for this case, I would simplify that one too. Performance should be the same, because both calls are black boxes without cross-language LTO (and with it, it should not matter).

But I let @wedsonaf comment in case I am missing something -- he can easily test & compare (he has some benchmarks for Rust Binder).

@nbdd0121
Member

nbdd0121 commented Jun 4, 2021

It's a shame that bindgen can't handle static inline functions and c2rust can't handle macros :(

It would save us a lot of hassle if we could just call into static inline functions.

The use of `#ifdef CONFIG_` statements in .c/.rs files should be
avoided: it makes the code much harder to maintain over time. See:
https://lore.kernel.org/lkml/YLjWKwhp7akqyR1S@kroah.com/

Use a Rust-C helper instead to leverage automatic `CONFIG` selection
in C kernel headers.

Signed-off-by: Sven Van Asbroeck <thesven73@gmail.com>
@ksquirrel
Member

Review of 40676ff8ffc7:

  • ✔️ Commit 40676ff: Looks fine!

@TheSven73
Collaborator Author

v1 -> v2

ojeda:

  • change "deprecated" -> "should be avoided" in commit msg

@nbdd0121
Member

nbdd0121 commented Jun 4, 2021

Idea: can we do some text processing, e.g. look into binding_generated.rs and generate a helper function only if it's missing from the generated bindings?

E.g. add these only if errname does not show up in binding_generated.rs:

helper_generated.c

const char *rust_helper_errname(int err)
{
	return errname(err);
}

binding_helper_generated.rs

extern "C" {
    #[link_name = "rust_helper_errname"]
    fn errname(err: c_types::c_int) -> *const c_types::c_char;
}
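
A naive first cut of that check could be plain text matching (sketch only; the bindings path and the "pub fn" pattern are assumptions about what bindgen emits):

use std::fs;

// Emit a rust_helper_* shim for `symbol` only if bindgen did not
// already generate a binding for it.
fn helper_needed(symbol: &str) -> std::io::Result<bool> {
    let bindings = fs::read_to_string("rust/kernel/bindings_generated.rs")?;
    Ok(!bindings.contains(&format!("pub fn {}(", symbol)))
}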

@TheSven73
Collaborator Author

TheSven73 commented Jun 4, 2021

Won't this issue go away on its own, once we can build the kernel with ThinLTO? This is already possible in v5.13-rc4.

I tried to merge v5.13-rc4 into rust but I'm getting merge conflicts in the Makefile that I don't know enough about. Maybe one of @ojeda @nbdd0121 could take a look? If we can merge, we can switch on ThinLTO in defconfig and that will probably fix the problem in one fell swoop, no?

@nbdd0121
Member

nbdd0121 commented Jun 4, 2021

There are two issues with the current approach, one of usability and one of performance:

  • For performance, it will go away with LTO. But that won't work if GCC is used to compile C. Also, we don't want to require LTO.
  • For usability though, I think adding a helper plus its Rust extern definition manually is quite painful, and LTO won't solve that; having some mechanism for easier exposure of macros/inline functions would be better.

@ojeda
Member

ojeda commented Jun 4, 2021

I tried to merge v5.13-rc4 into rust but I'm getting merge conflicts in the Makefile that I don't know enough about.

Sure, I will work on updating the repo to the latest -rc after the meeting.

@TheSven73
Collaborator Author

TheSven73 commented Jun 4, 2021

LTO won't solve that; having some mechanism for easier exposure of macros/inline functions would be better.

Good point. Should we move that discussion to a separate Issue or PR?

@ojeda
Member

ojeda commented Jun 4, 2021

Should we move that discussion to a separate Issue or PR?

Sounds good to me.

@ojeda
Member

ojeda commented Jun 4, 2021

Done: #352

@ojeda ojeda merged commit f9dc4fc into Rust-for-Linux:rust Jun 4, 2021
@TheSven73 TheSven73 deleted the rust-for-linux-no-ifdefs branch June 5, 2021 14:45
@wedsonaf

wedsonaf commented Jun 8, 2021

In general, unless we have proof of bad performance, please prefer simplicity.

Yes, I like this as a general guideline: let's choose simplicity unless there's a regression in a metric we care about; performance improvements that complicate the design/implementation should be justified with an improved metric we care about.

But I let @wedsonaf comment in case I am missing something -- he can easily test & compare (he has some benchmarks for Rust Binder).

Oddly enough, #357 seems to have improved performance. The following is a ping test: it sends a transaction to a server and waits for a response; the server just responds as soon as it can to any transactions it gets. The test repeats this 1M times and measures how long it takes.

Before #357:

root@wedsonaf-vm:/home/wedsonaf/tests# ./ping-client ./rust_binder
Total time: 10.131209588
Close result: 0
root@wedsonaf-vm:/home/wedsonaf/tests# ./ping-client ./rust_binder
Total time: 9.949979204
Close result: 0
root@wedsonaf-vm:/home/wedsonaf/tests# ./ping-client ./rust_binder
Total time: 10.035134993
Close result: 0
root@wedsonaf-vm:/home/wedsonaf/tests# ./ping-client ./rust_binder
Total time: 10.039153400
Close result: 0
root@wedsonaf-vm:/home/wedsonaf/tests# ./ping-client ./rust_binder
Total time: 10.009915560
Close result: 0

After #357:

root@wedsonaf-vm:/home/wedsonaf/tests# ./ping-client ./rust_binder
Total time: 9.592539875
Close result: 0
root@wedsonaf-vm:/home/wedsonaf/tests# ./ping-client ./rust_binder
Total time: 9.543542464
Close result: 0
root@wedsonaf-vm:/home/wedsonaf/tests# ./ping-client ./rust_binder
Total time: 9.581248170
Close result: 0
root@wedsonaf-vm:/home/wedsonaf/tests# ./ping-client ./rust_binder
Total time: 9.600895248
Close result: 0
root@wedsonaf-vm:/home/wedsonaf/tests# ./ping-client ./rust_binder
Total time: 9.573040549
Close result: 0

I'm converting my binder tests to kernel selftests; I'll convert this one as well so others can also run it.

@ojeda
Member

ojeda commented Jun 8, 2021

Definitely odd, and quite consistent and significant (~5%).

I'm converting my binder tests to kernel selftests; I'll convert this one as well so others can also run it.

🎉

@TheSven73
Collaborator Author

TheSven73 commented Jun 8, 2021

Yes, I like this as a general guideline: let's choose simplicity unless there's a regression in a metric we care about; performance improvements that complicate the design/implementation should be justified with an improved metric we care about.

💯

I'm converting my binder tests to kernel selftests; I'll convert this one as well so others can also run it.

🥇

@ojeda
Member

ojeda commented Jun 8, 2021

By the way, now that #361 is in, we could write #[bench]marks too. Not sure if we want to, though, because 1) we would depend on feature(test) again and 2) they run on the host, which limits their use. Nevertheless, they could be useful to test data structures and things like that.

In any case, I want to have a way to perform benchmarks that looks as simple as #[bench] (similar arguments as for #[test] etc.) -- but let's focus on that a bit later on.
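
As a reminder, such a host-side benchmark on nightly would look roughly like this (the workload is a placeholder):

#![feature(test)]
extern crate test;

use test::Bencher;

// Placeholder standing in for whatever we actually want to measure.
fn workload() -> usize {
    (0..1000usize).sum()
}

#[bench]
fn bench_workload(b: &mut Bencher) {
    // iter() runs the closure repeatedly and reports ns/iter;
    // black_box keeps the optimizer from deleting the work.
    b.iter(|| test::black_box(workload()));
}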

@TheSven73
Collaborator Author

👍 It's crucially important to be able to run benchmarks or self-tests on platforms such as the Raspberry Pi. It would be awesome if there were a way to accomplish this via a #[bench]/#[test]-like mechanism.

@ojeda
Member

ojeda commented Jun 8, 2021

It's crucially important to be able to run benchmarks or self-tests on platforms such as the Raspberry Pi. It would be awesome if there were a way to accomplish this via a #[bench]/#[test]-like mechanism.

Well, if you run make rusttest in the Raspberry Pi itself, that already works! :P (half joking here: it looks like the RPi can compile the kernel just fine, or at least some of the models).

But yeah, I want to have something like #[ktest] and #[kbench] that auto-magically runs the test at boot time in kernel space and has access to the kernel crate etc.

@bjorn3
Member

bjorn3 commented Jun 8, 2021

On nightly you can use #![reexport_test_harness_main = "test_main"] to test #![no_std] crates. See for example https://os.phil-opp.com/testing/ on how to test a kernel.
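
Roughly, following that post (trimmed sketch; the panic handler and the entry point that calls the re-exported test_main() are omitted):

#![no_std]
#![feature(custom_test_frameworks)]
#![test_runner(crate::test_runner)]
#![reexport_test_harness_main = "test_main"]

// The harness collects every #[test_case] function and hands the
// list to this runner instead of the default std test harness.
pub fn test_runner(tests: &[&dyn Fn()]) {
    for test in tests {
        test();
    }
}

#[test_case]
fn trivial_assertion() {
    assert_eq!(1 + 1, 2);
}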

@TheSven73
Collaborator Author

@wedsonaf Does your benchmark kernel have CONFIG_DEBUG_LOCK_ALLOC?

@wedsonaf

wedsonaf commented Jun 8, 2021

@wedsonaf Does your benchmark kernel have CONFIG_DEBUG_LOCK_ALLOC?

No, the following is the "lock debugging" section of my .config:

#
# Lock Debugging (spinlocks, mutexes, etc...)
#
CONFIG_LOCK_DEBUGGING_SUPPORT=y
# CONFIG_PROVE_LOCKING is not set
# CONFIG_LOCK_STAT is not set
# CONFIG_DEBUG_RT_MUTEXES is not set
# CONFIG_DEBUG_SPINLOCK is not set
# CONFIG_DEBUG_MUTEXES is not set
# CONFIG_DEBUG_WW_MUTEX_SLOWPATH is not set
# CONFIG_DEBUG_RWSEMS is not set
# CONFIG_DEBUG_LOCK_ALLOC is not set
# CONFIG_DEBUG_ATOMIC_SLEEP is not set
# CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set
# CONFIG_LOCK_TORTURE_TEST is not set
# CONFIG_WW_MUTEX_SELFTEST is not set
# CONFIG_SCF_TORTURE_TEST is not set
# CONFIG_CSD_LOCK_WAIT_DEBUG is not set
# end of Lock Debugging (spinlocks, mutexes, etc...)

# CONFIG_DEBUG_IRQFLAGS is not set
CONFIG_STACKTRACE=y
# CONFIG_WARN_ALL_UNSEEDED_RANDOM is not set
CONFIG_DEBUG_KOBJECT=y

The test is actually quite simple. Let me upload it as is so you can try it out if you'd like.

@wedsonaf

wedsonaf commented Jun 8, 2021

The test is here: wedsonaf@76001a4

You can build as follows (from the kernel git root):

make -C tools/testing/selftests TARGETS=binder

To run the test, first you need to run the server as follows:

./ping_server <device-name>

And leave it running. Then you can run the client as many times as needed:

./ping_client <device-name>

In both cases, <device-name> is a device you create with mknod to point to the Rust Binder device. You can also run the test against the C implementation: in this case, assuming you run mount -t binder nodev tmp to mount a new binder instance in tmp, the device name would be tmp/binder.

@TheSven73
Collaborator Author

I'm having some issues with the benchmark, so I created a separate issue: #365

@nbdd0121
Member

nbdd0121 commented Jun 8, 2021

Oddly enough, #357 seems to have improved performance. The following is a ping test: it sends a transaction to a server and waits for a response; the server just responds as soon as it can to any transactions it gets. The test repeats this 1M times and measures how long it takes.

Sounds like an issue where a mutex is highly concurrently accessed, and slowing down mutex_lock causes the fast path to be hit more often. Can you try without #357 but with a few NOPs added before calling bindings::mutex_lock? (Or maybe just run the benchmark on a single core.)

I tried to run the benchmark myself but ran into a BUG as well.
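
(Something like this is what I mean, with core::hint::spin_loop() standing in for raw NOPs and an arbitrary count:)

fn lock_noguard(&self) {
    // Artificial delay, just to see whether making the call slower
    // changes how often the mutex fast path is hit.
    for _ in 0..16 {
        core::hint::spin_loop();
    }
    // SAFETY: `mutex` points to valid memory.
    unsafe { bindings::mutex_lock(self.mutex.get()) };
}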

@ojeda
Member

ojeda commented Jun 8, 2021

On nightly you can use #![reexport_test_harness_main = "test_main"] to test #![no_std] crates. See for example https://os.phil-opp.com/testing/ on how to test a kernel.

Ah, then it can be easier than expected! Thanks! I will take a look.

@nbdd0121
Member

nbdd0121 commented Jun 9, 2021

Oddly enough, #357 seems to have improved performance. The following is a ping test: it sends a transaction to a server and waits for a response; the server just responds as soon as it can to any transactions it gets. The test repeats this 1M times and measures how long it takes.

Sounds like an issue where a mutex is highly concurrently accessed, and slowing down mutex_lock causes the fast path to be hit more often. Can you try without #357 but with a few NOPs added before calling bindings::mutex_lock? (Or maybe just run the benchmark on a single core.)

I tried to run the benchmark myself but ran into a BUG as well.

So I've tried the benchmark with cycle-level simulation on RISC-V, with and without #357. Here are the results (3 runs, averaged, all in seconds):

             Without #357   With #357   Difference
4 cores      11.074         11.105      +0.28%
1 core       10.927         10.944      +0.16%
Difference   -1.32%         -1.45%

So for my configuration, #357 does regress performance (though only very slightly). A more interesting result is that performance improves when I reduce the core count to just 1, as I expected. I wonder if any of you can reproduce the result?

@TheSven73
Collaborator Author

To know if differences as tiny as that are meaningful (as opposed to mere noise), we probably need multiple samples, combined with a statistical test for equal means.

@nbdd0121
Member

nbdd0121 commented Jun 9, 2021

To know if differences as tiny as that are meaningful (as opposed to mere noise), we probably need multiple samples, combined with a statistical test for equal means.

The runs are all in cycle-level simulation and there is no external noise. The standard deviations are quite small, and a two-sample unpaired t-test gives a P value of 0.03 (for the 1-core case), so the result is statistically significant.
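
(For anyone who wants to redo the arithmetic, a minimal self-contained Welch's t sketch in Rust; the P value then comes from a t-distribution table using the returned degrees of freedom:)

// Welch's two-sample t statistic and Welch-Satterthwaite degrees
// of freedom for two independent samples.
fn welch_t(a: &[f64], b: &[f64]) -> (f64, f64) {
    let mean = |xs: &[f64]| xs.iter().sum::<f64>() / xs.len() as f64;
    let var = |xs: &[f64], m: f64| {
        xs.iter().map(|x| (x - m).powi(2)).sum::<f64>() / (xs.len() as f64 - 1.0)
    };
    let (ma, mb) = (mean(a), mean(b));
    let (va, vb) = (var(a, ma), var(b, mb));
    let (na, nb) = (a.len() as f64, b.len() as f64);
    let se2 = va / na + vb / nb;
    let t = (ma - mb) / se2.sqrt();
    let df = se2.powi(2)
        / ((va / na).powi(2) / (na - 1.0) + (vb / nb).powi(2) / (nb - 1.0));
    (t, df)
}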

@TheSven73
Collaborator Author

the result is statistically significant.

Thanks! It's awesome to determine that properly! @wedsonaf, on what platform did you observe the 5% speedup, x86?

@wedsonaf

wedsonaf commented Jun 9, 2021

the result is statistically significant.

Thanks! It's awesome to determine that properly! @wedsonaf, on what platform did you observe the 5% speedup, x86?

Yes, x86-64 running on KVM with a single core, so external factors may have influenced the result. I can try again tomorrow.

(I had never seen the benchmark run below 10s before today, though I hadn't run it in a while...)

@TheSven73
Collaborator Author

TheSven73 commented Jun 9, 2021

On my arm32 system (Cortex-A9), #357 seems to make no difference. Everything is a factor of 20 slower (!) than what you're observing on your systems. So I'll rebuild the ping client/server with -O3 and try again.

(ETA: -O3 made no difference)

@nbdd0121
Member

nbdd0121 commented Jun 9, 2021

I tweaked the number of iterations, and it happens (by pure luck) to be roughly 10 seconds :)

@wedsonaf

wedsonaf commented Jun 9, 2021

I ran it again and still see an improvement:

At commit 5bb76a6:

root@wedsonaf-vm:/home/wedsonaf/tests# ./ping_client ./rust_binder
Total time: 9.657316220
root@wedsonaf-vm:/home/wedsonaf/tests# ./ping_client ./rust_binder
Total time: 9.732271775
root@wedsonaf-vm:/home/wedsonaf/tests# ./ping_client ./rust_binder
Total time: 9.697761638
root@wedsonaf-vm:/home/wedsonaf/tests# ./ping_client ./rust_binder
Total time: 9.615642174
root@wedsonaf-vm:/home/wedsonaf/tests# ./ping_client ./rust_binder
Total time: 9.795218516

At 5bb76a6 + revert of a8553d8:

root@wedsonaf-vm:/home/wedsonaf/tests# ./ping_client ./rust_binder
Total time: 10.009606552
root@wedsonaf-vm:/home/wedsonaf/tests# ./ping_client ./rust_binder
Total time: 10.192609879
root@wedsonaf-vm:/home/wedsonaf/tests# ./ping_client ./rust_binder
Total time: 9.992817305
root@wedsonaf-vm:/home/wedsonaf/tests# ./ping_client ./rust_binder
Total time: 10.047126409
root@wedsonaf-vm:/home/wedsonaf/tests# ./ping_client ./rust_binder
Total time: 10.271573930

I'm attaching my .config (renamed to config.txt so that GitHub allows it) -- I think beyond the rust/binder stuff, KMEMLEAK is the only thing I have enabled that wasn't in defconfig. I'll try again after disabling it.

@nbdd0121
Member

nbdd0121 commented Jun 9, 2021

Can you try with CONFIG_PREEMPT_NONE?

@TheSven73
Collaborator Author

TheSven73 commented Jun 9, 2021

I don't think we should be surprised by changes that affect platforms unequally. Or even in the opposite direction :)

Not sure why adding a rust_helper would speed up x86, but slow down RISC-V. Could be related to function inlining. Sometimes function inlining is slower, because bigger code takes up more of the instruction cache. The compiler is supposed to know the sweet spot. Better than humans anyway.

@wedsonaf

wedsonaf commented Jun 9, 2021

Without KMEMLEAK, I do see a regression:

At commit 5bb76a6 (average 4.6901051054):

root@wedsonaf-vm:/home/wedsonaf/tests# ./ping_client ./rust_binder
Total time: 4.694596770
root@wedsonaf-vm:/home/wedsonaf/tests# ./ping_client ./rust_binder
Total time: 4.721078787
root@wedsonaf-vm:/home/wedsonaf/tests# ./ping_client ./rust_binder
Total time: 4.741895176
root@wedsonaf-vm:/home/wedsonaf/tests# ./ping_client ./rust_binder
Total time: 4.645857380
root@wedsonaf-vm:/home/wedsonaf/tests# ./ping_client ./rust_binder
Total time: 4.647097414

At 5bb76a6 + revert of a8553d8 (average 4.4586401322):

root@wedsonaf-vm:/home/wedsonaf/tests# ./ping_client ./rust_binder
Total time: 4.430263982
root@wedsonaf-vm:/home/wedsonaf/tests# ./ping_client ./rust_binder
Total time: 4.473422230
root@wedsonaf-vm:/home/wedsonaf/tests# ./ping_client ./rust_binder
Total time: 4.456249542
root@wedsonaf-vm:/home/wedsonaf/tests# ./ping_client ./rust_binder
Total time: 4.431005058
root@wedsonaf-vm:/home/wedsonaf/tests# ./ping_client ./rust_binder
Total time: 4.502259849

KMEMLEAK has quite an impact on this benchmark...

@TheSven73
Collaborator Author

TheSven73 commented Jun 9, 2021

Edited: nevermind

@wedsonaf

wedsonaf commented Jun 9, 2021

Can you try with CONFIG_PREEMPT_NONE?

Regression from 5.055406773 to 5.1458177946. Slower than with CONFIG_PREEMPT_VOLUNTARY.

@TheSven73
Collaborator Author

So what's the conclusion here? Is the regression not acceptable? If not, how do we fix it? Would ThinLTO be preferable to reverting the PR?

@nbdd0121
Member

nbdd0121 commented Jun 9, 2021

I don't think we should revert that. #359 can resolve the issue without using #ifdefs.

I think the only conclusion we can draw at this point is that the binder benchmark isn't a reliable indicator for other changes, and we probably need a more deterministic (sequential) benchmark.

@ojeda
Member

ojeda commented Jun 9, 2021

So what's the conclusion here? Is the regression not acceptable? If not, how do we fix it? Would ThinLTO be preferable to reverting the PR?

It would be good to do a test with ThinLTO too. Let me update the kernel today.

@TheSven73
Collaborator Author

It would be good to do a test with ThinLTO too. Let me update the kernel today.

Mainline supports ThinLTO on x86 and arm64 only. So no arm 32-bit or RISC-V support for me or Gary for now...
