
[Feature Request] Allow disabling of execution segments when parallelism > 1 #90

Open
jdheyburn opened this issue Jan 14, 2022 · 6 comments

Comments

@jdheyburn

Great operator, thanks for making it!

I would like some fine-grained control over the runners that are created. I noticed that when parallelism is > 1, the number of VUs is shared across all the workers. My number of VUs is somewhat dynamic, and I currently have runners quitting prematurely because there is no work for them to do.

It would be great if I could prevent this from happening.

@yorugac
Collaborator

yorugac commented Jan 17, 2022

Hi @jdheyburn, thanks for opening the issue and PR!

I'm afraid I don't fully understand the problem here at the moment 😞
As common ground: the number of VUs is derived from the options of the k6 script, and although the exact number of VUs may be "dynamic" depending on the executor, it has a maximum limit from the start. Parallelism > 1 results in the test run being distributed equally among N runners, which is expected behavior. Depending on how specific iterations perform, some runners can exit before others, which is also expected behavior.
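For illustration, with parallelism: 2 the operator splits one test run into two equal execution segments, which is roughly equivalent to each runner seeing options like the ones below. This is only a sketch: the segment values are examples, and the operator hands the segments to the runners itself rather than through the script.

```javascript
// Sketch: how a 100-max-VU test looks to runner 1 of 2 under segmentation.
// The segment values are illustrative, not actual operator output.
export const options = {
  vus: 100,                            // max VUs declared for the whole test
  duration: '5m',
  executionSegment: '0:1/2',           // this runner executes the first half of the load
  executionSegmentSequence: '0,1/2,1',
};
```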

Looking at your PR, it looks to me like you expect to have N runners, each with 100% of the test load -- is that correct? If so, from the operator's viewpoint that is 1 test run, not N. The same can be achieved with parallelism: 1.

Could you please clarify a bit more?

@jdheyburn
Author

So I'm moving my k6 workloads from EC2 to k8s. To generate a production-equivalent load of traffic against my target (Redis) when using EC2 instances, I had to split the work across multiple instances running k6 (I have some helper scripts that allow me to do this). I was not able to get 1 k6 instance to replicate the load we had seen in production.

It is this workload that I am trying to run in the same manner on k8s, so when you say:

it looks to me like you expect to have N runners, each with 100% of the test load -- is that correct?

You are correct. My script.js has a number of configurations that can be made against it, such as how much load one runner should generate and what size of data to bootstrap into Redis. Granted, I could potentially have achieved the same with execution segments, but for now I would like to reproduce in k8s the same workload I was running on EC2, so that the test methodology is kept as identical as possible.
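To give a simplified picture of what I mean, the script is parameterised roughly along these lines (the names and numbers here are illustrative, not my exact configuration):

```javascript
// Illustrative sketch of script.js -- each runner (formerly an EC2 instance)
// is told how much load it alone should generate and how much data to seed.
const VUS = Number(__ENV.VUS_PER_RUNNER || 50);       // load for this one runner
const KEYS = Number(__ENV.BOOTSTRAP_KEYS || 100000);  // amount of data to bootstrap into Redis

export const options = {
  vus: VUS,
  duration: __ENV.DURATION || '10m',
};

export function setup() {
  // seed KEYS records into Redis before the load starts
}

export default function () {
  // read/write traffic against Redis
}
```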

I hope that makes sense 🙂

@yorugac
Collaborator

yorugac commented Jan 19, 2022

@jdheyburn sorry for the delay! Thanks, I think I'm starting to understand your use case 😅 However, one more thing: do you have plans to switch to distributed execution with the operator? TBH, right now it sounds to me like a problem of migration, not the end goal.

Also, a small detour about data. There is ongoing related work here:
grafana/k6#1539
grafana/k6#2273
Related issue in k6-operator: #64

Once this work is done, I think it might help with the data splitting problem. (Also, in case you have related input from your use case or otherwise, please consider sharing in those issues!)

@jdheyburn
Author

@yorugac No worries!

However, one more thing: do you have plans to switch to distributed execution with the operator? TBH, right now it sounds to me like a problem of migration, not the end goal.

I do, yes, but I am not sure when this would be. Currently I'm running my tests off my own Docker image built from the attached PR, and I thought the extra config might be useful for other people. The operator should have sensible defaults, as it does, but it would be useful to make it flexible for others who are not following the defaults - sensibly 😄

Thanks for sharing the data-splitting work. I'm not entirely sure it can be applied to my testing, but it'll give me something to think about.

@yorugac
Collaborator

yorugac commented Jan 24, 2022

Hi @jdheyburn!
We have discussed your use case internally, among the k6 team, and came to the conclusion that we really need a bit more input from community members before accepting this proposal. Your use case is undoubtedly valid and interesting, but it can also be solved another way: not by disabling segmentation, but by modifying the configuration of your tests / executors (see the sketch after the list below). There are several other reasons that we consider here:

  • arguably, the whole point of k6-operator is to help distribute execution
  • one of the upcoming features of k6-operator is support for the --out cloud option, but it won't be usable with this new flag. This implies that we'll need to add more complex, k6-specific spec validation than before.
  • the plans to extend support for more distributed features in k6 will likely clash with this change in a similar way
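To illustrate the alternative mentioned above: if the goal is for each of N runners to carry the full per-instance load from the EC2 setup, the script can scale its total load by N instead of the operator disabling segmentation. A rough sketch with illustrative names and numbers (PARALLELISM and VUS_PER_RUNNER are just environment variables you would pass yourself, not existing operator settings):

```javascript
// Sketch only: scale the declared load by the number of runners so that each
// 1/N execution segment still generates the full per-instance load.
const RUNNERS = Number(__ENV.PARALLELISM || 4);        // keep in sync with `parallelism` in the K6 resource
const VUS_PER_RUNNER = Number(__ENV.VUS_PER_RUNNER || 100);

export const options = {
  scenarios: {
    redis_load: {
      executor: 'constant-vus',
      vus: RUNNERS * VUS_PER_RUNNER, // total for the whole run; segmentation splits it N ways
      duration: '10m',
    },
  },
};

export default function () {
  // test logic against the target (e.g. Redis) goes here
}
```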

I personally consider your use case to be primarily a migration issue that we could definitely support better, but there is more than one way to add support for migrations and no clarity yet on which is best. Some additional input from the community would definitely be very beneficial.

That said, please leave this issue open: the k6 team would greatly appreciate it if other users of k6-operator added their input here or simply voted with 👍 / 👎
🙏

Thank you for your initiative and effort, and I hope this response will not discourage you 🙂

@jdheyburn
Author

Not a problem at all, I'll work off my fork until I can get my test script to work with this parameter. Thanks for taking the time to discuss it 😃
