Releases: epi052/feroxbuster
v1.12.3
- swapped emoji used when filtering status codes to be in line with the other filter emoji
@Flangyver strikes again!
v1.12.2
- fixed url parsing issue when a word starts with 2 or more `/`'s

🥳 Special thanks to @Sicks3c for the report that ID'd the issue 🥳
Fixed an issue where a word from the wordlist that starts with 2 or more `/`'s would fail to properly join with the given domain. When a wordlist contains words that begin with 2 forward slashes, e.g. `//1_40_0/static/js`, the library used for making requests joins the word onto the base url in a surprising way:

```
base_url: https://localhost/
word:     //1_40_0/static/js
result:   https://1_40_0/static/js
```

This is due to the fact that `//...` is a valid (scheme-relative) url. Words with two or more `/`'s will be normalized down to 0 prepended `/`'s to allow for a proper join.
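The normalization described above can be sketched as follows. `normalize_word` and `join_url` are hypothetical helper names for illustration, not feroxbuster's actual implementation:

```rust
/// Strip any leading '/' characters from a wordlist entry so that joining
/// it onto the base url can't be interpreted as a scheme-relative url.
/// (Illustrative helper, not feroxbuster's actual code.)
fn normalize_word(word: &str) -> &str {
    word.trim_start_matches('/')
}

/// Join a normalized word onto a base url with exactly one separating '/'.
fn join_url(base: &str, word: &str) -> String {
    format!("{}/{}", base.trim_end_matches('/'), normalize_word(word))
}
```

With this normalization, `join_url("https://localhost/", "//1_40_0/static/js")` produces `https://localhost/1_40_0/static/js` instead of the surprising scheme-relative result shown above.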
v1.12.1
- fixed regression in overall scan's progress bar (was finishing too early)
- broke filters.rs into a module containing multiple sub-modules (part of a long-term code restructure to increase maintainability/quality)
- 403 responses are now tested to see if they're likely to be a directory; if they are, a recursive scan is kicked off against the 403 directory (thanks to @N0ur5 for the suggestion 😁)
ex:

```
/admin/          => 403
/admin/login.php => 200
```
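The decision above can be sketched as a simple predicate over a response. The function name and the trailing-slash heuristic are illustrative assumptions, not feroxbuster's actual logic:

```rust
/// Hedged sketch of the new 403 handling: a Forbidden response whose url
/// looks like a directory is a candidate for a recursive scan, since its
/// contents (e.g. /admin/login.php) may still respond with 200.
/// (Illustrative only; not feroxbuster's actual code.)
fn should_recurse_into(status: u16, url: &str) -> bool {
    status == 403 && url.ends_with('/')
}
```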
v1.12.0
- updated tokio to 1.0.1
- updated tokio-util to 0.6
- updated reqwest to 0.11
- fixed link extraction to append a `/` char to all sub-directories, but leave the last part of the link (possibly a file) intact
- updated `install-nix.sh` to skip font install if already present
- 🥳 added interactive scan cancellation menu (been waiting on this one awhile) 🥳
v1.11.1
- updated crossterm to 0.19
- updated console to 0.14
- fixed CI pipeline clippy issues
- added statistics.rs
The statistics module tracks a slew of metrics about the scan. Below is a JSON representation to illustrate what gets tracked (produced with `-o stuff --json`):
```json
{
  "type": "statistics",
  "timeouts": 0,
  "requests": 4661,
  "expected_per_scan": 4658,
  "total_expected": 4658,
  "errors": 0,
  "successes": 23,
  "redirects": 11,
  "client_errors": 4627,
  "server_errors": 0,
  "total_scans": 1,
  "initial_targets": 1,
  "links_extracted": 0,
  "status_200s": 23,
  "status_301s": 7,
  "status_302s": 4,
  "status_401s": 0,
  "status_403s": 3,
  "status_429s": 0,
  "status_500s": 0,
  "status_503s": 0,
  "status_504s": 0,
  "status_508s": 0,
  "wildcards_filtered": 0,
  "responses_filtered": 0,
  "resources_discovered": 35,
  "url_format_errors": 0,
  "redirection_errors": 0,
  "connection_errors": 0,
  "request_errors": 0,
  "directory_scan_times": [
    9.105557468
  ],
  "total_runtime": [
    9.739999046
  ]
}
```
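As a sanity check on the counters, note that in this sample run (with zero errors and timeouts) the per-class response counts roll up into the overall request count. The function below is an illustrative check using the sample values, not part of the statistics module:

```rust
/// In a run with no errors or timeouts, the response-class counters
/// partition the completed requests:
///   successes + redirects + client_errors + server_errors == requests
/// (Illustrative check; field names follow the JSON output above.)
fn responses_accounted_for(
    requests: u64,
    successes: u64,
    redirects: u64,
    client_errors: u64,
    server_errors: u64,
) -> bool {
    successes + redirects + client_errors + server_errors == requests
}
```

With the sample values: 23 + 11 + 4627 + 0 = 4661 requests.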
v1.11.0
- added the ability to specify an example page for filtering pages that are similar to the given example

`--filter-similar-to` requests the page passed to it via CLI, after which it hashes the response body using the SSDeep algorithm. All subsequent pages are hashed and compared to the original request's hash. If the comparison of the two hashes meets a certain percentage of similarity (currently 95%), then that request will be filtered out.

SSDeep was selected as it does a good job of identifying near-duplicate pages once content-length reaches a certain size, while remaining performant. Other algorithms were tested but resulted in huge performance hits (orders of magnitude slower on requests/second).
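The filtering decision can be sketched as below. feroxbuster uses SSDeep fuzzy hashing; to keep this sketch dependency-free, a character-trigram Jaccard index stands in for the real similarity measure, and both function names are hypothetical:

```rust
use std::collections::HashSet;

/// Percentage similarity between two response bodies. A trigram Jaccard
/// index stands in for SSDeep here purely for illustration.
fn similarity(a: &str, b: &str) -> f64 {
    let grams = |s: &str| -> HashSet<String> {
        s.as_bytes()
            .windows(3)
            .map(|w| String::from_utf8_lossy(w).into_owned())
            .collect()
    };
    let (ga, gb) = (grams(a), grams(b));
    if ga.is_empty() && gb.is_empty() {
        return 100.0; // two empty bodies are trivially identical
    }
    let inter = ga.intersection(&gb).count() as f64;
    let union = ga.union(&gb).count() as f64;
    100.0 * inter / union
}

/// Mirror of the --filter-similar-to decision: drop any response at least
/// 95% similar to the example page's body.
fn should_filter(example_body: &str, body: &str) -> bool {
    similarity(example_body, body) >= 95.0
}
```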
v1.10.3
- fixed issue where only the first target passed via stdin would get processed
- added test that will catch any regressions
🎄 Special thanks to @SleepiPanda for the issue report! 🎉
v1.10.2
v1.10.1
v1.10.0
- added language in the README to clarify word/line filters, sparked from convo with @N0ur5
- added max total runtime option, i.e. enforceable time limit (`--time-limit`)
- updated shell completion files to incorporate `--time-limit` option
- added `time_limit` to example config
Special thanks to @mzpqnxow for the time limit suggestion 🙏
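The enforcement behind `--time-limit` can be sketched as a deadline check consulted by the scan loop; the function name is hypothetical and feroxbuster's internals differ:

```rust
use std::time::{Duration, Instant};

/// Hedged sketch of an enforced scan time limit: once the elapsed time
/// reaches the limit, the scan loop should stop issuing requests.
/// (Illustrative only; not feroxbuster's actual code.)
fn time_limit_reached(started: Instant, limit: Duration) -> bool {
    started.elapsed() >= limit
}
```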