Primitive malloc freshness analysis by sim642 · Pull Request #690 · goblint/analyzer · GitHub

Primitive malloc freshness analysis #690


Merged: sim642 merged 8 commits into master on Apr 20, 2022

Conversation

@sim642 (Member) commented Apr 18, 2022

This is a very primitive malloc freshness analysis that should be sufficient for the initializations in zstd and chrony.

Changes

  1. Add the mallocFresh analysis, which tracks a must-set of freshly allocated heap variables and uses it to exclude races on them (a minimal sketch follows after this list). The set is forgotten very readily, to hopefully ensure soundness.
  2. Add zstd's custom allocation functions to LibraryFunctions. This is necessary because, due to some invalidation somewhere, an unknown pointer gets added to the possible values, even though the program only ever calls the custom allocators with NULL. These are only used when added to exp.extraspecials.
  3. (Add a conf for analyzing zstd races such that all spurious races are gone!) To be re-added to interactive.
  4. Handle address-set results in base's branch transfer function.
  5. Handle address sets in base's handling of logical-not expressions.
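
As an illustration of item 1, here is a minimal C sketch (assumed example code, not a test case from this PR) of the initialization pattern it targets: writes to a freshly malloc'ed object cannot race as long as the pointer has not escaped to another thread, and the freshness must be given up as soon as it does.

```c
#include <stdlib.h>
#include <pthread.h>

struct ctx {
    pthread_mutex_t m;
    int counter;
};

void *worker(void *arg) {
    struct ctx *c = arg;
    pthread_mutex_lock(&c->m);
    c->counter++;                    /* after publication: needs the lock */
    pthread_mutex_unlock(&c->m);
    return NULL;
}

int main(void) {
    struct ctx *c = malloc(sizeof *c);
    if (!c)
        return 1;
    pthread_mutex_init(&c->m, NULL);
    c->counter = 0;                  /* c is still must-fresh: no other thread
                                        can reach it, so no race here */
    pthread_t t;
    pthread_create(&t, NULL, worker, c);  /* c escapes: freshness is forgotten */
    pthread_mutex_lock(&c->m);
    c->counter++;                    /* now a shared access, protected by m */
    pthread_mutex_unlock(&c->m);
    pthread_join(t, NULL);
    free(c);
    return 0;
}
```

Only the write before pthread_create is meant to be excluded from race reporting; once the allocation is no longer in the must-fresh set, the remaining accesses go through the usual race detection.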

TODO

  • Figure out why it doesn't work on full zstd. Due to the custom allocator? Yes.
  • Check whether it works on chrony. @vesalvojdani
  • Rebase onto master.
  • Re-add the zstd-race conf when merging into interactive.

@vesalvojdani (Member) commented:

Yes, it eliminates all races for Chrony.

@sim642 (Member, Author) commented Apr 19, 2022

Using the included conf, all the spurious zstd races are gone!

[Warning][Race] Memory location ((alloc@sid:15485), lib/common/pool.c:122:5-122:68)[?].threadLimit@lib/common/pool.c:122:5-122:68 (race with conf. 110):
  read with [symblock:{p-lock:*.queueMutex}, mhp:{tid=POOL_thread; created=All Threads}] (conf. 110) (lib/common/pool.c:74:9-84:9)
  write with [mhp:{tid=main; created=All Threads}, thread:main] (conf. 110) (lib/common/pool.c:158:9-158:38)

Summary for all memory locations:
        safe:         2390
        vulnerable:      0
        unsafe:          1
        -------------------
        total:        2391

EDIT: Actually the remaining race is also spurious, but for rather subtle reasons: the read happens while ctx->queueEmpty == 1 and the write happens while ctx->queueEmpty == 0. So some kind of symbolic privatization and access partitioning by those symbolically privatized values might be able to rule it out, but that's too far-fetched for the time being.
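
To illustrate the EDIT above, here is a heavily simplified sketch (hypothetical code, not the actual lib/common/pool.c): the read only happens in the queueEmpty == 1 state and the write only in the queueEmpty == 0 state, so the two accesses are mutually exclusive even though a plain lockset argument cannot show it.

```c
#include <stddef.h>
#include <pthread.h>

/* Hypothetical, heavily simplified stand-in for the POOL context. */
struct POOL_ctx {
    pthread_mutex_t queueMutex;
    int queueEmpty;        /* 1 while the job queue is empty */
    size_t threadLimit;
};

/* worker thread: reads threadLimit only while queueEmpty == 1 */
size_t worker_read(struct POOL_ctx *ctx) {
    size_t limit = 0;
    pthread_mutex_lock(&ctx->queueMutex);
    if (ctx->queueEmpty)
        limit = ctx->threadLimit;
    pthread_mutex_unlock(&ctx->queueMutex);
    return limit;
}

/* main thread: writes threadLimit only while queueEmpty == 0 */
void main_write(struct POOL_ctx *ctx, size_t newLimit) {
    if (!ctx->queueEmpty)
        ctx->threadLimit = newLimit;
}
```

Ruling this report out would require relating each access to the value of queueEmpty at that program point, i.e. the symbolic privatization and access partitioning mentioned above.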

@sim642 sim642 marked this pull request as ready for review April 19, 2022 12:01
@sim642 sim642 mentioned this pull request Apr 19, 2022
@sim642 sim642 changed the base branch from interactive to master April 20, 2022 12:25
@sim642 sim642 merged commit cc1d9cf into master Apr 20, 2022
@sim642 sim642 deleted the malloc-fresh branch April 20, 2022 12:53
@sim642 sim642 added this to the v2.0.0 milestone Aug 12, 2022