
Akka Dispatcher Blocked #170

Closed
joseluratm opened this issue Feb 28, 2019 · 4 comments
Comments

@joseluratm

Akka dispatcher blocked

Request Type

Feature Request

Work Environment

Question Answer
OS version (server) Debian
Package Type Docker

Problem Description

The other day I launched a large batch of analyzers (about 1000). This morning I checked and many of them were still in the "waiting" state.
I looked at the running tasks and saw that only 4 analyzers were actually executing; the dispatcher was completely blocked. Once I killed those processes, the Akka dispatcher started launching the rest.

As far as I can tell, this happens because these analyzers perform external requests, and in some of them the connection was probably lost. If no timeout is specified on the request, the connection stays open, the script never finishes, and the dispatcher remains blocked.

Steps to Reproduce

I managed to reproduce the issue in the following way, in case you want to try it:

  1. Run netcat listening on any port
  2. Create an analyzer that connects to that netcat instance
  3. Run a lot of these analyzers

Possible Solutions

The workaround I had to implement is the following:

requests.get(address, timeout=X)

You can see the documentation here: http://docs.python-requests.org/en/master/user/quickstart/#timeouts

For now this solution is working for me, but I don't know whether there is another, internal way to handle this kind of situation (from the Akka configuration or something like that).

Complementary information

https://doc.akka.io/docs/akka/2.5/dispatchers.html
https://github.com/TheHive-Project/Cortex/blob/e4ee899a2d2a2bedae4e139befe0082f6ca4e5a8/conf/reference.conf
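On the Akka side, the standard way to keep blocking work from starving the default dispatcher is to route it to a dedicated dispatcher. A sketch in the HOCON style of the linked reference.conf (the dispatcher name and pool size here are illustrative, not Cortex's actual keys):

```hocon
# Illustrative only: a dedicated thread pool so that long-running,
# blocking analyzer jobs cannot starve the default dispatcher.
analyzer-blocking-dispatcher {
  type = Dispatcher
  executor = "thread-pool-executor"
  thread-pool-executor {
    fixed-pool-size = 16
  }
  throughput = 1
}
```

This isolates the damage (only the blocking pool fills up) but does not by itself unstick a job that never ends, so it complements rather than replaces a timeout.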

@norgalades

Hello... I had the same issue. I was running an analyzer that can take a very long time, and I was wondering whether, at some point, the job gets killed automatically.

@To-om
Contributor

To-om commented Mar 4, 2019

If an analyzer never ends, it blocks Cortex. To solve this issue, I'll add a job timeout: if a job reaches it, the analyzer's process will be killed.
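The proposed job timeout can be sketched like this (illustrative Python only; Cortex itself is Scala/Akka, and this is not its actual implementation). The "analyzer" is a child process that would otherwise run far longer than the limit:

```python
import subprocess
import sys

# Hypothetical sketch of a job timeout: run the analyzer as a child
# process and kill it if it exceeds the limit.
killed = False
try:
    subprocess.run(
        [sys.executable, "-c", "import time; time.sleep(30)"],
        timeout=1,  # the job timeout
    )
except subprocess.TimeoutExpired:
    # subprocess.run() kills the child process before raising.
    killed = True
```

The key point is that the supervisor enforces the limit from outside, so even an analyzer stuck on a lost connection gets reaped.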

@To-om To-om self-assigned this Mar 4, 2019
@To-om To-om added the bug label Mar 4, 2019
@joseluratm
Author

Thanks for the response!!

@crackytsi

@To-om If possible, please make it configurable; e.g. I sometimes have jobs that take about 30 minutes to analyze. It might also be an option to make the number of analyzers that can run in parallel configurable.
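The parallel-limit idea could be sketched as follows (illustrative Python, not Cortex code): a bounded semaphore caps how many analyzer jobs execute at once, while the rest wait in line.

```python
import threading
import time

MAX_PARALLEL = 2  # the configurable cap on concurrent analyzers
slots = threading.BoundedSemaphore(MAX_PARALLEL)
lock = threading.Lock()
running = 0
peak = 0

def run_analyzer(job_id):
    """Stand-in for one analyzer job; tracks peak concurrency."""
    global running, peak
    with slots:  # blocks while MAX_PARALLEL jobs are already active
        with lock:
            running += 1
            peak = max(peak, running)
        time.sleep(0.05)  # stand-in for the real analysis work
        with lock:
            running -= 1

threads = [threading.Thread(target=run_analyzer, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# peak concurrency never exceeds MAX_PARALLEL
```

A cap like this keeps a flood of submissions (the ~1000 analyzers from the original report) from exhausting the dispatcher, though a hung job still holds its slot until a timeout reclaims it.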

@To-om To-om added this to the 3.0.0 milestone Mar 11, 2019
To-om added a commit that referenced this issue Mar 13, 2019
@To-om To-om closed this as completed Mar 13, 2019
nadouani added a commit that referenced this issue Apr 3, 2019