
Local deployment of magma experiment environment problems. #143

Open
ramerzase opened this issue Apr 3, 2023 · 3 comments

Comments

@ramerzase

Hi, I encountered a problem similar to this issue and couldn't build Magma according to the official manual (which moves all the experimental environments into Docker). Therefore, I could only refer to the build.sh provided by Magma to compile the programs on a local server. However, after a series of experiments using AFL, I found that the following information cannot be easily obtained:

  1. The number of triggered CVEs
  2. The specific id of the triggered CVE
  3. The time when a CVE is triggered

I referred to the official technical reference and used exp2json.py to obtain the above three types of information, but the following error was reported:

FileNotFoundError: [Errno 2] No such file or directory: '/data/fuzz/res/ar/afl/libsndfile/sndfile_fuzzer/0/monitor'

My workdir folder structure is as follows:
[screenshot of the workdir directory structure]

In fact, I want to know:

  • What caused the above problem? (What is the monitor?)
  • The specific process for building the MAGMA environment on a local server. (Do I need to run an extra MAGMA script when I run afl-fuzz?)
@ramerzase
Author

ramerzase commented Apr 3, 2023

By the way, I understand that the maintainers of Magma may question why I didn't follow the official Magma deployment tutorial after seeing the questions I have raised these days. But I don't think I'm the only one with similar problems.

I think 99% of the people who use MAGMA are security researchers targeting USENIX Security, Oakland (IEEE S&P), NDSS, or CCS, because other conferences do not have such strict requirements for fuzzing evaluation. Many people can only deploy MAGMA locally, due either to technical constraints (such as using hardware to assist fuzzing, or modifying the instrumentation) or to other irresistible factors (such as being a poor fuzzing researcher from China). At that point there may be less than 30 days before the minor revision or the next deadline of a top conference. To get their fuzzer running smoothly on MAGMA, they have to spend 5~7 days studying MAGMA's code and the problems others have encountered before they can successfully run their fuzzer on it. Unfortunately, they then find there is no way to use the official scripts to automate crash analysis, so they must manually use gdb to analyze hundreds of crash-triggering inputs and compare them against the CVEs given in MAGMA. It is such a pain.

What I want to ask is: why not let everyone who uses MAGMA build the environment locally, as with LAVA-M, where you only need to replay the crash-triggering seed against the target binary to get the bug ID?

@adrianherrera
Member

Hi @ramerzase,

I understand your point on needing to run the experiments outside of Docker. We will happily consider PRs that provide this capability. However, we do not have the time/resources to do this ourselves (for the same reasons you noted, i.e., that we are also grad students focusing on completing our research).

@hazimeh
Member

hazimeh commented Apr 4, 2023

I recently had an e-mail exchange inquiring about this issue. While there is currently no out-of-the-box solution for running Magma locally, I can echo the instructions I provided earlier, to ensure that a local setup satisfies the assumptions and produces the files required by Magma's scripts.

To recreate the environment settings expected by Magma, you would need to:

  • apply the bug patches to the cloned target repos
  • install the required dependencies for each target before compilation (see: preinstall.sh)
  • compile Magma's runtime library (see magma/build.sh); you need not define the MAGMA_STORAGE macro yet, because it could also be read from the environment
  • compile and instrument each target with compilation flags matching those in build.sh of each target, keeping in mind that Magma imposes additional compiler flags (see: docker/Dockerfile:75)
  • configure your fuzzer to export a MAGMA_STORAGE environment variable before launching the target, as a unique shmem name for each campaign (it's a memory-mapped file that is accessed by canaries and by the monitor)
  • compile and run the monitor in a loop, in parallel with each fuzzing campaign, making sure to provide it with the path which MAGMA_STORAGE points to

This would allow you to reproduce much of the time-based evaluation of Magma outside its Docker images.
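As a rough illustration only, the checklist above might translate into a shell sketch like the following for a single target (using libsndfile as an example). All paths, patch locations, and flag values here are my assumptions about a typical checkout, not Magma's actual layout; the target's preinstall.sh and build.sh, magma/build.sh, and docker/Dockerfile remain the authoritative sources.

```shell
#!/bin/sh
# Hypothetical sketch of a local (non-Docker) Magma setup for one target.
# Paths, patch locations, and flags are illustrative assumptions.
set -e

MAGMA=/path/to/magma                 # cloned Magma repository (assumed path)
TARGET="$MAGMA/targets/libsndfile"   # per-target directory (assumed layout)
WORK=/data/fuzz/work

# 1. Clone the target and apply Magma's bug patches
git clone https://github.com/libsndfile/libsndfile "$WORK/libsndfile"
cd "$WORK/libsndfile"
for p in "$TARGET"/patches/*.patch; do
    patch -p1 < "$p"
done

# 2. Install build dependencies (see the target's preinstall.sh)
sh "$TARGET/preinstall.sh"

# 3. Build Magma's runtime library (see magma/build.sh); MAGMA_STORAGE
#    need not be baked in as a macro -- it can be read from the environment
sh "$MAGMA/magma/build.sh"

# 4. Build/instrument the target with the flags from its build.sh, plus
#    the extra compiler flags Magma injects (see docker/Dockerfile:75)
CFLAGS="<target flags + Magma's extra flags>" sh "$TARGET/build.sh"

# 5. Before fuzzing, export a shmem name unique to this campaign; the
#    canaries and the monitor both access this memory-mapped file
export MAGMA_STORAGE=magma_libsndfile_campaign_0

# 6. Run the monitor in a loop alongside the campaign, pointed at the
#    path MAGMA_STORAGE refers to, logging to the campaign's workdir
while true; do
    "$MAGMA/monitor" "$MAGMA_STORAGE" >> "$WORK/0/monitor"
    sleep 5
done &
```

Note that step 6 is what produces the per-campaign `monitor` file that exp2json.py failed to find in the original report.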

Alternatively, if you're only interested in post-processing/deduplicating your crashes directory (as you've described in your post), you can disregard all Magma customizations for your fuzzing campaigns (i.e. just compile the original target without patches or runtime support).
Then, once the campaigns are done and you have crashing PoCs, you can compile the Magma-instrumented targets and then invoke the monitor with the --fetch watch flag, passing it the cmd args that launch the instrumented target with the PoC you would like to triage.
The monitor would then output which Magma bugs were reached and/or triggered. This behavior is also described in the monitor's Technical Reference. Keep in mind that this does not provide you with real-time/timestamped logs, but only serves to validate whether your PoCs triggered Magma-injected bugs.
This approach, however, cannot be used with exp2json.py, since this script generates time-series plots, and this information can only be collected at runtime. You may attempt to recreate the expected directory layout by using PoC file creation timestamps, but that is then out of scope.
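A minimal sketch of this offline triage path, assuming the flag spelling given above and hypothetical paths for the monitor, the instrumented target, and the crashes directory (verify the exact CLI against the monitor's Technical Reference):

```shell
#!/bin/sh
# Offline triage sketch: validate crashing PoCs against a Magma-instrumented
# target. Paths and the exact monitor invocation are assumptions.
export MAGMA_STORAGE=magma_triage    # shmem file the canaries write into

for poc in /path/to/afl/output/crashes/*; do
    echo "== $poc =="
    # --fetch watch: run the given target command once with the PoC and
    # report which Magma bugs were reached and/or triggered
    ./monitor --fetch watch ./sndfile_fuzzer "$poc"
done
```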

I hope these tips somewhat help guide you towards a more complete evaluation!
