---
title: Tool_Support
displaytext: Tool Support/Results
layout:
tab: true
order: 4
tags: benchmark
---

The results, from several years ago, for 5 free tools (PMD, FindBugs, FindBugs with the FindSecBugs plugin, SonarQube, and ZAP) against version 1.2 of the Benchmark are available here: https://github.com/OWASP-Benchmark/BenchmarkJava/blob/master/scorecard/OWASP_Benchmark_Home.html. You'll have to clone this Git repo and open the file locally. We included multiple versions of the FindSecBugs and ZAP results so you can see the improvements those tools made in finding vulnerabilities in the Benchmark.
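For example (a minimal sketch; it assumes the canonical GitHub location of the repo, and xdg-open is the Linux command - use open on macOS or start on Windows):

```
# Clone the repo and open the scorecard home page in a browser
git clone https://github.com/OWASP-Benchmark/BenchmarkJava.git
xdg-open BenchmarkJava/scorecard/OWASP_Benchmark_Home.html
```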

We have Benchmark results for all of the following tools, but haven't publicly released the results for any commercial tools. However, we did include a 'Commercial Average' page, which summarizes the 2016 results for 6 commercial SAST tools along with an anonymized version of each SAST tool's scorecard.

The Benchmark can generate results for the following tools:

Free Static Application Security Testing (SAST) Tools (Both Open Source and Commercial):

Many of the free open source SAST tools come bundled with the Benchmark so you can run them yourself. Simply run script/runTOOLNAME.(sh/bat) and it puts the results into the /results directory automatically. There are scripts for running PMD, FindBugs, SpotBugs, and FindSecBugs, as sketched below.
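For example, from the root of a cloned repo, a typical run might look like the following (a minimal sketch; it assumes the scripts follow the runTOOLNAME naming pattern described above):

```
# Run one of the bundled SAST tools against the Benchmark test cases;
# its results file is written into the /results directory automatically
./script/runFindSecBugs.sh

# The other bundled tools follow the same pattern, e.g.:
./script/runPMD.sh
```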

Note: We looked into supporting Checkstyle, but it has no security rules, just like PMD. The fb-contrib FindBugs plugin doesn't have any security rules either. We did test Error Prone and found that it does report some uses of insecure ciphers (CWE-327), but that's it.

Commercial (non-Free) SAST Tools:

We are looking for results from other commercial SAST tools. If you have a license for any static analysis tool not already listed above and can run it against the Benchmark, sending us the results file would be very helpful.

If you have a license for any commercial SAST tool, you can also run it against the Benchmark. Just put your results files in the /results folder of the project, then run createScorecards.sh (or .bat) and it will generate a scorecard in the /scorecard directory for every currently supported tool's results it finds.
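For instance, a run against a hypothetical results file might look like this (the filename and export location are placeholders for whatever your tool produces):

```
# Copy your tool's exported results file into the project's /results folder
cp ~/exports/Benchmark1.2-MyTool.xml results/

# Generate scorecards in /scorecard for every supported results file found
./createScorecards.sh
```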

Free Dynamic Application Security Testing (DAST) Tools (Both Open Source and Commercial):

Note: While we support scorecard generators for these Free and Commercial DAST tools, it can be difficult to get a full/clean run against the Benchmark. As such, some of these scorecard generators might need some additional work to properly reflect their results. If you notice any problems, let us know.

  • Arachni - .xml results file
    • To generate the .xml report from an .afr results file, run: ./bin/arachni_reporter "Your_AFR_Results_Filename.afr" --reporter=xml:outfile=Benchmark1.2-Arachni.xml
  • Burp Suite Community Edition - .xml results file
    • To generate XML results: click on 'benchmark' in the site map so you see ALL findings in the Issues pane. Then select ALL issues in the Issues pane, right-click, and choose 'Report selected issues'. Select XML, then next, next, next, and save to a file.
    • To reduce the size of the results file, you can omit all the details and exclude the requests/responses, which cuts the file size by about two thirds.
  • ZAP - .json or .xml results file. To generate a complete ZAP results file so you can produce a valid scorecard, make sure you:
    • Tools > Options > Alerts - set Max alert instances to a high value (e.g., 500).
    • Then: Report > Generate XML Report...
  • Wapiti - .json or .xml results file (see the example after this list).
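As one example of producing such a results file, a Wapiti run might look roughly like this. This is a sketch only: the target URL is an assumption about where your Benchmark instance is running, and you should confirm the flags against your Wapiti version.

```
# Scan a locally running Benchmark instance and write an XML report
# (target URL is an assumption; adjust to wherever your Benchmark runs)
wapiti -u https://localhost:8443/benchmark/ -f xml -o Benchmark1.2-wapiti.xml

# Drop the report into /results so the scorecard generator can pick it up
cp Benchmark1.2-wapiti.xml results/
```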

Commercial (non-Free) DAST Tools:

If you have access to other DAST tools, PLEASE RUN THEM against the Benchmark for us and send us the results file so we can build a scorecard generator for that tool.

Commercial Interactive Application Security Testing (IAST) Tools:

Commercial Hybrid Analysis Application Security Testing Tools:

WARNING: If you generate results for a commercial tool, be careful who you distribute them to. Each tool has its own license defining when any results it produces can be released or made public. It may be against the terms of a commercial tool's license to publicly release that tool's score against the OWASP Benchmark. The OWASP Benchmark project takes no responsibility if someone else releases such results.