
Memory Usage is too high when using Progress(...) #117

Closed · gnuletik opened this issue Jul 10, 2023 · 9 comments

Assignees: Ullaakut
Labels: bug (Something isn't working), research (Research further), urgent


gnuletik commented Jul 10, 2023

When running nmap with service info and progress reporting, the memory usage climbs above 1GB in around 10 minutes, which seems quite high.

	scanner, err := nmap.NewScanner(
		ctx,
		nmap.WithTargets(target),
		nmap.WithPorts("0-6000"),
		nmap.WithServiceInfo(),
	)
	if err != nil {
		return fmt.Errorf("nmap.NewScanner: %w", err)
	}

	progress := make(chan float32)

	result, warnings, err := scanner.Progress(progress).Run()
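
The snippet above creates the progress channel but never reads from it; a minimal sketch of a reader, started before Run(), could look like the following (not part of the original report; whether the library closes the channel at the end of the scan is an assumption here, and the log import is assumed):

    // Drain the progress channel in its own goroutine, started before Run(),
    // so the reporting side is never blocked on sends.
    go func() {
        for p := range progress {
            log.Printf("scan progress: %.2f%%", p)
        }
    }()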

I see multiple ways to reduce the memory usage:

  • Using the ToFile function to store the nmap XML result to a file, as suggested in Fix run with progress and add new Run function #69. However, this causes the Progress value to always be 0.
  • Make the progress report interval configurable (see the sketch after this list). It is currently hard-coded to 100ms:

    nmap/nmap.go (line 77 in a750324):

        s.args = append(s.args, "--stats-every", "100ms")

    nmap/nmap.go (line 173 in a750324):

        time.Sleep(time.Millisecond * 100)
    But I guess that this solution would not properly fix the issue.
  • Avoid storing all the TaskProgress values and only keep the last one.
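
To illustrate the second idea, here is a hypothetical sketch of what a configurable interval could look like; WithStatsEvery does not exist in the library, and the sketch assumes the Option type is func(*Scanner), as the existing With* options suggest, with the fmt and time imports in place (it would have to live inside the library, since the args field is unexported):

    // Hypothetical option, for illustration only: expose the --stats-every
    // interval instead of hard-coding "100ms".
    func WithStatsEvery(interval time.Duration) Option {
        return func(s *Scanner) {
            s.args = append(s.args, "--stats-every",
                fmt.Sprintf("%dms", interval.Milliseconds()))
        }
    }

A lower reporting frequency would shrink the stored TaskProgress data proportionally, but as noted above it would only mitigate the growth rather than remove it.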

Do you see other possible fixes?

Thanks!

Ullaakut (Owner) commented:

I don't think the gigabyte of used RAM is due to the task progress slice or to the output kept in memory. The nmap output amounts to maybe a few megabytes on a very large scan, and 10 minutes of progress reports every 100ms would mean 6000 structs of a few bytes each, so the slice likely takes only a few kilobytes.

The RAM usage most likely comes from this library's dependencies and Go's garbage collection policy. 1GB of usage doesn't necessarily mean that the program is using 1GB right now; it only means that 1GB is reserved (probably because the total allocations amount to 1GB), but if the system tries to get some of that memory back, the garbage collector will free what's not in use.

I could do a proper benchmark if this becomes a real issue, but so far, from my testing, the consumption seems to be nothing out of the ordinary.
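
A standalone sketch (mine, not from this thread) of the distinction between "used" and "reserved" memory: runtime.ReadMemStats exposes both values.

    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        var m runtime.MemStats
        runtime.ReadMemStats(&m)
        // HeapAlloc is memory held by live objects; Sys is what the Go
        // runtime has reserved from the operating system.
        fmt.Printf("live heap: %d MiB, reserved from OS: %d MiB\n",
            m.HeapAlloc>>20, m.Sys>>20)
    }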

Ullaakut self-assigned this Jul 17, 2023
Ullaakut added the question (Further information is requested) label Jul 17, 2023
gnuletik (Author) commented:

Thanks for the feedback!

When removing the scanner.Progress(...) configuration, the memory usage stays below 100MB after 10 minutes, which is why I thought this is linked to the task progress slice.

Regarding Go's garbage collection policy: the issue occurred in a container / Kubernetes environment with the memory request and limit set to 1GB. During multiple runs, the container was OOMKilled, which means the memory actually used was above 1GB.

I'll try to use the GOMEMLIMIT env variable to configure the GC.
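
Side note (mine, not from the original comment): the same soft limit can also be set from inside the program via runtime/debug, which is equivalent to the GOMEMLIMIT environment variable. The 900 MiB value below is only an example, and the limit only governs Go-managed memory, so it cannot prevent an OOM kill if live memory genuinely exceeds the container limit.

    package main

    import "runtime/debug"

    func main() {
        // Equivalent to running with GOMEMLIMIT=900MiB (example value):
        // the GC will try to keep Go-managed memory under this soft limit.
        debug.SetMemoryLimit(900 << 20)
        // ... start the scan here ...
    }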

Ullaakut (Owner) commented:

Ah, that is interesting 🤔 There is then indeed an issue with the progress mode. We need to look into it! Will update the labels accordingly.

Ullaakut added the bug and research labels and removed the question label Jul 17, 2023
Ullaakut (Owner) commented:

TODO, in case someone else picks it up:

  • Run memory usage benchmarks with and without Progress enabled (see the sketch below)
  • Compare benchmarks to find source of memory hunger
  • Fix it
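
A possible shape for the first item, sketched by me rather than taken from the repository: a Go benchmark with allocation reporting. The target host, port range, and a matching BenchmarkScanWithoutProgress variant are placeholders, and the imports (testing, context, and the nmap package) are omitted.

    func BenchmarkScanWithProgress(b *testing.B) {
        b.ReportAllocs()
        for i := 0; i < b.N; i++ {
            scanner, err := nmap.NewScanner(
                context.Background(),
                nmap.WithTargets("scanme.nmap.org"), // placeholder target
                nmap.WithPorts("1-1000"),
            )
            if err != nil {
                b.Fatal(err)
            }
            progress := make(chan float32)
            go func() {
                for range progress { // discard progress values
                }
            }()
            if _, _, err := scanner.Progress(progress).Run(); err != nil {
                b.Fatal(err)
            }
        }
    }

Running the same benchmark without the Progress(progress) call and comparing the allocation numbers (or heap profiles via pprof) should point at where the extra memory goes.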

gnuletik (Author) commented:

I ran more tests with nmap.NewScanner(ctx, nmap.WithPorts("-"), nmap.WithServiceInfo()).

I ran a first scan without nmap.Progress(...).

Here is the memory usage:

[Screenshot: memory usage graph, 2023-07-17 at 16:58]

Then I ran another container with:

  • nmap.Progress(...).
  • Memory request & limit on the kubernetes pod: 2Gi
  • Environment variable GOMEMLIMIT=1932735283 (90% of the memory request/limit)

Here is the memory usage:

[Screenshot: memory usage graph, 2023-07-17 at 17:01]

The container ended with an OOMKilled error from Kubernetes.

Ullaakut (Owner) commented Jul 24, 2023

This looks like a serious issue indeed. Thanks for the details! I'll try to find some time to fix this.

llwq123456 commented:

@Ullaakut Hello, are the SIGUSR1 and SIGUSR2 signals used in this project?

Ullaakut (Owner) commented:

@llwq123456 No

gnuletik (Author) commented:

https://github.com/Ullaakut/nmap/releases/tag/v3.0.4 seems to have fixed the issue!

[Screenshots: memory usage graphs, 2024-10-15 at 00:24]

Thanks 🔥
