Reuse existing build containers when testing auto-generated harnesses #499
DavidKorczynski added a commit that referenced this issue on Aug 10, 2024:
First touch on #499. Depends on: google/oss-fuzz#12284

The way this works is by saving a cached version of `build_fuzzers` after `compile` has run, and then modifying a project's Dockerfile to use this cached build image plus an adjusted build script. For example, for brotli the Dockerfile is originally:

```sh
FROM gcr.io/oss-fuzz-base/base-builder
RUN apt-get update && apt-get install -y cmake libtool make
RUN git clone --depth 1 https://github.com/google/brotli.git
WORKDIR brotli
COPY build.sh $SRC/
COPY 01.c /src/brotli/c/fuzz/decode_fuzzer.c
```

A Dockerfile is then created that relies on the cached version, and it looks like:

```sh
FROM cached_image_brotli
# RUN apt-get update && apt-get install -y cmake libtool make
#
# RUN git clone --depth 1 https://github.com/google/brotli.git
# WORKDIR brotli
# COPY build.sh $SRC/
# COPY 01.c /src/brotli/c/fuzz/decode_fuzzer.c
#
COPY adjusted_build.sh $SRC/build.sh
```

`adjusted_build.sh` is then a script that only builds the fuzzers. This means we can also use the `build_fuzzers`/`compile` workflows as we know them.

More specifically, this PR:

- Makes it possible to build Docker images of fuzzer build containers. It does this by running `build_fuzzers`, saving the Docker container, and then committing the container to an image. This image will have a project's build set up as it is after `compile` has run. The image is then used when OFG builds fuzzers.
- Supports only ASAN mode for now. It should be easy to extend this to coverage builds too.
- Currently builds the images first and then uses them locally. We could extend this, probably in a later step, to use containers pushed by OSS-Fuzz itself.
- Only performs the caching if a "cache-build-script" exists (added a few for some projects), which contains the build instructions to run on top of the cached build. It should be easy to extend this to also rely on a database of auto-generated build scripts (ref: google/oss-fuzz#11937), but I think it is nice to have both the option of creating the scripts ourselves and an auto-generated DB.

---------

Signed-off-by: David Korczynski <david@adalogics.com>
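As a rough illustration of the caching step described above, the flow might look like the sketch below. `infra/helper.py build_fuzzers` and `docker commit` are standard OSS-Fuzz/Docker commands, but the container lookup and the image tag are assumptions for illustration, not the exact commands from this PR.

```sh
# Sketch of the caching flow (assumptions noted inline).
python3 infra/helper.py build_fuzzers brotli        # full build; runs compile in a container
CONTAINER_ID=$(docker ps -aq | head -n 1)           # assumes the build container is the newest one
docker commit "$CONTAINER_ID" cached_image_brotli   # snapshot the post-compile state as an image
```

Subsequent harness iterations can then start `FROM cached_image_brotli`, skipping the apt installs, the clone, and the full project build.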
For each generated harness we currently run the project's entire build process every time we test the harness, including when testing fixes. However, the only difference between the existing build artifacts and the ones we test is the harness file, i.e. the difference is minimal. It would be great to speed up the process with an architecture that lets us rebuild only the harness and not the whole project.
High-level proposed solution:

We save the build containers on OSS-Fuzz and then use these containers as the source for rebuilding an auto-generated harness. The rebuilding will be done by logic that runs a `rebuild_fuzzer.sh` script. This script will be selected from a scripts database we have stored, which can hold both manually entered scripts (i.e. `rebuild_fuzzer.sh` scripts we write by hand) and scripts generated using automated approaches, e.g. google/oss-fuzz#11937. The ability to add manual scripts will be useful in cases where auto-generation of `rebuild_fuzzer.sh` scripts is insufficient. A sketch of what such a script could look like follows the todo list below.

Todos:

- `rebuild_fuzzer.sh`
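As an illustration, a minimal `rebuild_fuzzer.sh` for brotli might look like the following. It assumes the cached image still contains the compiled library archive from the earlier full build; the archive path and include directory are assumptions for illustration, not taken from an actual script.

```sh
#!/bin/bash -eu
# Hypothetical rebuild_fuzzer.sh for brotli: rebuild only the harness,
# linking against library artifacts left behind by the cached full build.
# The archive path (./build/liblib.a) is an assumption for illustration.
cd $SRC/brotli
$CC $CFLAGS -c -I c/include c/fuzz/decode_fuzzer.c -o decode_fuzzer.o
$CXX $CXXFLAGS decode_fuzzer.o ./build/liblib.a $LIB_FUZZING_ENGINE \
    -o $OUT/decode_fuzzer
```

Because the expensive configure/make steps are skipped, testing a new harness iteration reduces to a single compile-and-link.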