[FEATURE REQUEST] Dynamically tuning concurrency/connection limit or bailing after n connection failures #123
Comments
I'm not sure how this got closed, but reopening. As always, thank you for your awesome requests and suggestions! I like the idea of this, and I think once we get #107 merged (which allows us to cancel a scan), this ticket becomes much more manageable. Given how things are structured, I think the following will need to happen:
As I've walked myself through this, I don't think this issue is necessarily blocked by #107. #107 only becomes a blocker of the
... closed automatically again ...
Update on this one: I've added metrics for most things I can think of. The table only shows non-zero values; the JSON contains everything that's tracked. String output
JSON output {"type":"statistics","timeouts":0,"requests":1451,"expected_per_scan":206,"total_expected":1451,"errors":2,"successes":3,"redirects":6,"client_errors":1442,"server_errors":0,"total_scans":7,"links_extracted":0,"status_403s":2,"wildcards_filtered":0,"responses_filtered":0,"resources_discovered":9,"directory_scan_times":[1.631990827,1.603911018,1.558497619,1.684073758,1.012269852,2.218053672,2.185941124],"total_runtime":[0.0]} Any other useful info you can think of, @mzpqnxow?
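Since everything tracked is emitted as a single JSON object, deciding whether to bail on a site could be done by post-processing those statistics. Below is a minimal, hypothetical sketch of such a check; the 0.25 error-ratio threshold is illustrative and not a feroxbuster default, and only field names visible in the sample above (`requests`, `errors`, `timeouts`) are relied on.

```python
import json

def should_bail(stats_line: str, max_error_ratio: float = 0.25) -> bool:
    """Return True when hard failures dominate a scan's statistics.

    Hypothetical post-processing of the statistics JSON shown above;
    the threshold is illustrative, not a feroxbuster default.
    """
    stats = json.loads(stats_line)
    requests = stats.get("requests", 0)
    # Treat connection errors and timeouts as "hard" failures; 4xx/5xx
    # responses still indicate the site is answering, so they are ignored.
    hard_failures = stats.get("errors", 0) + stats.get("timeouts", 0)
    return requests > 0 and hard_failures / requests > max_error_ratio
```

For the sample output above (2 errors out of 1451 requests), this would not bail; a site where a third of requests time out would.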
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Update: v2.0.0 will add ... Not considering this closed, as automatically detecting and adjusting rate is still a goal; however, I think the correct mechanism is now in place to tweak the scan speed.
Progress on this issue can be seen at #212.
A common problem I run into is that some sites have issues when supporting too many concurrent connections, or too high a rate of connections. Of course every site has its breaking point; read my next paragraph for more detail on this. This can be mitigated by tuning feroxbuster with -t and -L for each individual target, of course.

The problem comes in when doing testing across a large number of sites at once, using, e.g., GNU parallel. If you are performing testing against a medium or large organization with many websites, sometimes you'll need to batch a large set of commands due to testing time constraints, and it won't be practical to test and tune the -t and -L settings for each individual site, since they can vary quite a bit within a large set. Consider for this example a list of 1000 or more sites.

A nice feature would be to either:
Some workarounds here: tune the -t and -L setting for each site; this is prohibitively expensive in terms of time during a large-scale test.

This may be beyond the scope of what you would like to implement and maintain within feroxbuster, but for me, it would be a very useful feature.
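Short of feroxbuster doing this internally, a batch driver could approximate the workaround cheaply: rerun a site with a halved -t value whenever the previous pass looked overloaded. A minimal sketch, assuming only the -t and -u flags discussed in this thread (everything else is illustrative):

```python
def retry_args(url: str, prev_threads: int, overloaded: bool) -> list:
    """Build a retry command line, halving -t when the prior scan of this
    site appeared overloaded. Hypothetical wrapper logic, not a real tool."""
    threads = max(2, prev_threads // 2) if overloaded else prev_threads
    return ["feroxbuster", "-u", url, "-t", str(threads)]
```

A GNU parallel run could then feed its failures back through this function instead of hand-tuning 1000+ sites up front.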
Curious what you think about this.

Thanks, I appreciate your development on this tool. I haven't seen a public tool that performs as well as feroxbuster, with such flexibility and such robust, advanced features, since skipfish, which is no longer maintained and never really had a happy medium between "way too aggressive" and "completely limited in its findings".