
[FEATURE REQUEST] Dynamically tuning concurrency/connection limit or bailing after n connection failures #123

Closed
mzpqnxow opened this issue Nov 14, 2020 · 6 comments · Fixed by #127, #122, #132, #121 or #133
Labels: enhancement, has-PR, pinned

@mzpqnxow

A common problem I run into is that some sites have trouble supporting too many concurrent connections, or too high a rate of connections (every site has its breaking point; see the next paragraph for more detail). This can, of course, be mitigated by tuning feroxbuster with -t and -L for each individual target.

The problem comes in when testing across a large number of sites at once using, e.g., GNU parallel. If you are testing a medium or large organization with many websites, you'll sometimes need to batch a large set of commands due to testing time constraints, and it won't be practical to test and tune the -t and -L settings for each individual site, since the right values can vary quite a bit across a large set. Consider, for this example, a list of 1000 or more sites.

A nice feature would be to either:

  1. (Simple Solution) Simply bail out after n connection failures
  2. (Complex / Better Solution) Tune the threads and/or concurrent connections setting dynamically, based on the occurrence of connection failures
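
To make option 1 concrete, here is a minimal sketch in Python (purely illustrative; feroxbuster itself is written in Rust, and the class and method names here are invented) of a tracker that signals a bail-out after n consecutive connection failures:

```python
class FailureTracker:
    """Signal a bail-out after `max_failures` consecutive connection failures.

    Illustrative sketch of option 1 above, not feroxbuster code; the names
    are assumptions for this example.
    """

    def __init__(self, max_failures: int):
        self.max_failures = max_failures
        self.consecutive = 0  # failures seen since the last success

    def record(self, ok: bool) -> bool:
        """Record one request result; return True if the scan should bail."""
        if ok:
            self.consecutive = 0  # any success resets the streak
        else:
            self.consecutive += 1
        return self.consecutive >= self.max_failures
```

A success resets the counter, so only a sustained run of failures (not occasional flaky requests) would trigger the bail-out.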

Some workarounds here:

  1. As I mentioned, manually testing each site and using per-site -t and -L settings; this is prohibitively time-consuming during a large-scale test
  2. Using a very conservative value across all sites; this penalizes the many sites that can handle (in many cases) several multiples of that conservative setting, slowing the entire batch down significantly

This may be beyond the scope of what you would like to implement and maintain within feroxbuster, but for me, it would be a very useful feature.

Curious what you think about this.

Thanks, I appreciate your development on this tool. I haven't seen a public tool that performs as well as feroxbuster, with such flexibility and such robust, advanced features, since skipfish, which is no longer maintained and never really had a happy medium between "way too aggressive" and "completely limited in its findings".

@epi052 (Owner) commented Nov 26, 2020

I'm not sure how this got closed, but reopening.

As always, thank you for your awesome requests and suggestions!

I like the idea of this, and I think once we get #107 merged (which allows us to cancel a scan), this ticket becomes much more manageable. Given how things are structured, I think the following will need to happen:

  • track errors (discussed here)
  • some configurable way of either opting-in or opting-out of auto-tuning (if opt-in, probably include a dial for number of errors or something)
  • pause scanning for a handful of seconds to allow requests already in flight to complete
  • limit concurrency in a way that makes sense
    • -t would require cancelling a directory and restarting it, whereas -L could just keep things paused until that directory is allowed to proceed. -L feels like the better solution, if it's effective. Would need some testing.
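
The "limit concurrency" step above could look something like this sketch, where the effective -L value is halved whenever errors cross a configurable threshold (a hypothetical policy for illustration only; the halving strategy and the function name are assumptions, not the project's implementation):

```python
def tuned_limit(current_limit: int, errors: int, threshold: int) -> int:
    """Halve the effective concurrent-scan limit (-L) once the error count
    crosses `threshold`, never dropping below 1.

    Hypothetical auto-tuning policy for illustration; not feroxbuster code.
    """
    if errors >= threshold:
        return max(1, current_limit // 2)
    return current_limit
```

Calling this after each pause-and-drain cycle would walk the limit down geometrically until errors stop accumulating.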

As I've walked myself through this, I don't think this issue is necessarily blocked by #107. #107 only becomes a blocker if the -L solution doesn't really help/work. We won't really know whether it's effective until it's attempted.

@epi052 epi052 reopened this Nov 26, 2020
@epi052 epi052 mentioned this issue Nov 26, 2020
@epi052 (Owner) commented Nov 26, 2020

... closed automatically again ...

@epi052 epi052 reopened this Nov 26, 2020
@epi052 epi052 mentioned this issue Nov 27, 2020
@epi052 epi052 reopened this Dec 4, 2020
@epi052 epi052 mentioned this issue Dec 4, 2020
@evi1r0s3 evi1r0s3 mentioned this issue Dec 5, 2020
@epi052 epi052 reopened this Dec 14, 2020
@evi1r0s3 evi1r0s3 mentioned this issue Dec 14, 2020
@epi052 epi052 mentioned this issue Dec 24, 2020
@epi052 epi052 added the has-PR label Dec 24, 2020
@epi052 (Owner) commented Jan 1, 2021

Update on this one: I've added metrics for most things I can think of. The table only shows non-zero values; the JSON contains everything that's tracked.

String output

──────────────────────────────────────────────────
 📊                Scan Summary                📊 
───────────────────────────┬──────────────────────
 Average Dir Scan          │      1.6992 secs     
 Longest Dir Scan          │      2.2181 secs     
 Shortest Dir Scan         │      1.0123 secs     
 Total Scan Time           │      3.2576 secs     
 403 Forbidden             │           2          
 Client Error Codes        │         1442         
 Errors                    │           2          
 Redirects                 │           6          
 Requests Expected per Dir │          206         
 Requests Sent             │         1451         
 Resources Discovered      │           9          
 Success Status Codes      │           3          
───────────────────────────┴──────────────────────

JSON output

{"type":"statistics","timeouts":0,"requests":1451,"expected_per_scan":206,"total_expected":1451,"errors":2,"successes":3,"redirects":6,"client_errors":1442,"server_errors":0,"total_scans":7,"links_extracted":0,"status_403s":2,"wildcards_filtered":0,"responses_filtered":0,"resources_discovered":9,"directory_scan_times":[1.631990827,1.603911018,1.558497619,1.684073758,1.012269852,2.218053672,2.185941124],"total_runtime":[0.0]}
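
For example, a hypothetical auto-tuner could key off the error ratio derivable from these statistics (a sketch using a trimmed copy of the JSON above; the field names match that output, but the throttling idea is an assumption, not implemented behavior):

```python
import json

# Trimmed copy of the statistics JSON above; field names match that output.
stats = json.loads('{"timeouts": 0, "requests": 1451, "errors": 2}')

# A hypothetical auto-tuner could throttle once this ratio crosses a threshold.
error_ratio = (stats["errors"] + stats["timeouts"]) / stats["requests"]
```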

Any other useful info you can think of @mzpqnxow ?

@epi052 epi052 removed the has-PR label Jan 5, 2021
stale bot commented Jan 20, 2021

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale label Jan 20, 2021
@epi052 epi052 added pinned and removed stale labels Jan 21, 2021
@epi052 (Owner) commented Feb 2, 2021

Update: v2.0.0 will add --rate-limit, which accepts a number representing the desired number of requests per second.

I'm not considering this closed, as automatically detecting and adjusting the rate is still a goal; however, I think the correct mechanism is now in place to tweak the scan speed.
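
As a sketch of the requests-per-second idea behind such a rate limit (illustration only; not feroxbuster's actual Rust implementation, and the function name is invented), a minimal limiter spaces successive calls 1/n seconds apart:

```python
import time

def rate_limited(requests_per_second: float):
    """Return a wait() callable that spaces successive calls at least
    1 / requests_per_second apart.

    Minimal sketch of per-second rate limiting; not feroxbuster code.
    """
    interval = 1.0 / requests_per_second
    last = [0.0]  # monotonic timestamp of the previous call

    def wait():
        delay = last[0] + interval - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        last[0] = time.monotonic()

    return wait
```

Each worker would call wait() before sending a request; the first call passes through immediately, and subsequent calls sleep just long enough to hold the target rate.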

@epi052 epi052 added the has-PR label Feb 4, 2021
@epi052 (Owner) commented Feb 4, 2021

Progress on this issue can be seen at #212.

@epi052 epi052 mentioned this issue Feb 6, 2021
@epi052 epi052 added this to the 2.1.0 milestone Feb 14, 2021
@epi052 epi052 mentioned this issue Feb 17, 2021