
Add option sem.unknown_function.invalidate.args #707

Merged: 3 commits merged into master on May 3, 2022

Conversation

sim642 (Member) commented on Apr 27, 2022

This is currently on top of #695 so I can test both together on zstd.
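For context, a minimal C sketch of the situation the option governs (function and variable names here are hypothetical, not from zstd): a call to a function whose body the analysis cannot see, with a pointer argument, where invalidate.args decides whether the pointed-to data must be assumed to be overwritten.

```c
/* Hypothetical example: ext_process has no body available to the analysis,
 * so it is an "unknown function". */
extern void ext_process(int *buf);

int example(void) {
    int x = 42;
    /* With invalidate.args enabled, x must be assumed to hold any value after
     * this call; with it disabled, x is still known to be 42 afterwards. */
    ext_process(&x);
    return x;
}
```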

@sim642 sim642 added feature in progress precision benchmarking pr-dependency Depends or builds on another PR, which should be merged before labels Apr 27, 2022
@sim642 sim642 self-assigned this Apr 27, 2022
sim642 (Member, Author) commented on Apr 27, 2022

Compared to #696 (comment) this makes the following difference:

 Summary for all memory locations:
-	safe:         1736
-	vulnerable:    68
-	unsafe:        870
-	-------------------
-	total:        2674
+	safe:         1978
+	vulnerable:     12
+	unsafe:        630
+	-------------------
+	total:        2620

There is still quite a large number of write accesses from job.function(job.opaque), where the unknown function pointer is possibly called. So maybe not invalidating isn't enough; the access analysis also needs a way of excluding these (possibly under a different option name).
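For reference, a minimal sketch of that pattern, assuming a job struct with the two fields from the quoted call (the struct layout is illustrative, not copied from zstd):

```c
/* Illustrative sketch of the pool-worker pattern; only the call itself
 * matches the line quoted above, the struct layout is assumed. */
typedef struct {
    void (*function)(void *);  /* function pointer unknown to the analysis */
    void *opaque;              /* argument passed to it */
} Job;

static void run_job(Job job) {
    /* Unknown function pointer call: with invalidate.args disabled, the access
     * analysis would emit only a read access for the argument, not a write. */
    job.function(job.opaque);
}
```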

EDIT: Actually there's no need for a separate option. The access analysis should use the same invalidate.args option to emit write accesses (which would correspond exactly to invalidating writes). Read accesses can still be safely emitted.

EDIT 2: The last point does indeed get rid of some of them, so I'm committing that here.

sim642 (Member, Author) commented on Apr 28, 2022

Regarding the remaining zstd races, I don't see any easy way for us to exclude them, because there seems to be a lot of value-based exclusion going on. For example, there are 222 races to all the offsets of ((alloc@sid:8732), lib/compress/zstd_compress.c:114:7-114:90).

One of them is the following:

[Warning][Race] Memory location ((alloc@sid:8732), lib/compress/zstd_compress.c:114:7-114:90).initialized@lib/compress/zstd_compress.c:114:7-114:90 (race with conf. 110):
  write with mhp:{tid=POOL_thread; created=All Threads} (conf. 110) (lib/compress/zstd_compress.c:1992:9-1992:28)
  write with [mhp:{tid=main; created=All Threads}, thread:main] (conf. 110) (lib/compress/zstd_compress.c:1992:9-1992:28)

I believe it's not possible for writes to initialized to happen concurrently:

  1. The compression is either started in the main thread directly or entirely in the pool, so the two cannot race. But separating them requires recording accesses based on some other field, which determines which mode is being used.
  2. When compression is started in the pool, a special firstJob field is used (which is true for only one thread), and that thread has the responsibility of doing some initialization. By construction, only one struct in an array of structs has that field set to true, but that's an especially complex non-racing condition (see the sketch below).

And on top of all of that, it also needs to be symbolic, because even if multiple compressions (with multiple pools, etc.) are started at once, they don't share the context structs, so there shouldn't be any races between multiple simultaneous compressions. It's just that for each concrete instance of the allocated context, the array of jobs contains exactly one element where the field is true, and all the initializing accesses are thus non-racing.
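A minimal sketch of the firstJob pattern described above, with assumed struct and field names rather than the actual zstd ones:

```c
/* Illustrative sketch (not the actual zstd code) of the non-racing pattern:
 * exactly one element of the jobs array has firstJob set, and only that
 * job's thread performs the initializing write. */
typedef struct {
    int initialized;
} SharedState;

typedef struct {
    int firstJob;              /* true for exactly one job per compression */
    SharedState *shared;       /* context shared by all jobs of one compression */
} Job;

static void worker(Job *job) {
    if (job->firstJob) {
        /* Taken by exactly one thread per compression, so the writes cannot
         * actually race; proving that requires value-based (and symbolic)
         * reasoning about firstJob. */
        job->shared->initialized = 1;
    }
    /* ... do the actual compression work ... */
}
```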

Base automatically changed from access-free to master May 3, 2022 14:48
@sim642 sim642 removed the pr-dependency Depends or builds on another PR, which should be merged before label May 3, 2022
@sim642 sim642 merged commit fb54fa5 into master May 3, 2022
@sim642 sim642 deleted the unknown-invalidate-args branch May 3, 2022 14:48
@sim642 sim642 added this to the v2.0.0 milestone Aug 12, 2022