
Change CosmosDB max throughput #1487

Conversation

heavylemon1 (Contributor):

This solves issue #1313 by reducing the CosmosDB throughput from 4000 to 400.

I also changed MAX_CONTAINER_THROUGHPUT to 400, since MAX_DATABASE_THROUGHPUT will be 400 (please let me know if I understood this correctly).

@javiertoledo (Member) left a comment:

Your change is correct, but I left a comment in case you have the time to tweak it a bit more. Also, in order to merge this, you'll need to run the command rush change. This will prompt you about the changes and let you pick the right version bump required for the change. In this case, there are no API changes, so a fix change should suffice.

Comment on lines 1 to 2:
export const MAX_CONTAINER_THROUGHPUT = 400
export const MAX_DATABASE_THROUGHPUT = 400
@javiertoledo (Member):

Hi @heavylemon1, first of all, thanks for contributing, it really helps!

These parameters are a bit sensitive, and it would be a good idea to allow tweaking them. As a quick-and-dirty solution, we could try loading them from environment variables like this:

Suggested change:
- export const MAX_CONTAINER_THROUGHPUT = 400
- export const MAX_DATABASE_THROUGHPUT = 400
+ export const MAX_CONTAINER_THROUGHPUT = process.env.AZURE_MAX_CONTAINER_THROUGHPUT ?? 400
+ export const MAX_DATABASE_THROUGHPUT = process.env.AZURE_MAX_DATABASE_THROUGHPUT ?? 400

If it makes sense to you, could you please add a note to the documentation to make these configuration options visible? I'd say this is the right place: https://docs.boosterframework.com/going-deeper/infrastructure-providers/#azure-provider-configuration
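If these options land, deploy-time usage would presumably look something like this (variable names taken from the suggestion above; this sketch is not from the actual documentation):

```shell
# Override the default 400 RU/s caps before deploying
# (variable names as proposed in the review suggestion)
export AZURE_MAX_CONTAINER_THROUGHPUT=1000
export AZURE_MAX_DATABASE_THROUGHPUT=1000
```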

You can find the documentation in the website folder in the same Booster repository.

Thanks!

@heavylemon1 (Contributor, author):

Got it @javiertoledo, I will change those as suggested. I was also thinking of using parseInt, since CosmosDB throughput is based on Request Units, which are integers as far as I understand (so we can ensure the values are what we expect).

So I will first change the implementation to use process.env, then update the documentation, and finally run rush change.

Thanks!

…hput to use process env and added documentation
@heavylemon1 (Contributor, author):

Hello @javiertoledo !

I added the improvements you suggested in the comments.

Regarding process.env: I think that any value provided there will be received as a string by the Node process regardless, so I parsed it to an integer, with a fallback to 400 if it's undefined.
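A minimal sketch of the parsing approach described above (the helper name parseThroughput is hypothetical, not from the actual PR):

```typescript
// Hypothetical helper illustrating the approach described above.
// process.env values are always strings (or undefined), so parse to an
// integer and fall back to 400 when the variable is unset or not a number.
const parseThroughput = (raw: string | undefined, fallback = 400): number => {
  const parsed = parseInt(raw ?? '', 10)
  return Number.isNaN(parsed) ? fallback : parsed
}

export const MAX_CONTAINER_THROUGHPUT = parseThroughput(process.env.AZURE_MAX_CONTAINER_THROUGHPUT)
export const MAX_DATABASE_THROUGHPUT = parseThroughput(process.env.AZURE_MAX_DATABASE_THROUGHPUT)
```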

I also added the file generated by rush change.
Let me know if everything looks good or you have any suggestions.

@heavylemon1 (Contributor, author):

Hi @javiertoledo ,
Just wanted to check in regarding this pull request that I opened a while ago.
I'm new to contributing to open source, so I might be missing something or doing something incorrectly.
If that's the case please don't hesitate to let me know.

Thanks!

@javiertoledo (Member) left a comment:

Hi @heavylemon1, you did everything correctly, sorry for the delay. We've been busy with other things. I'll start the test and merge process now. Thanks for taking care of the suggestions and handling the potential errors. Using parseInt to make sure the parameter is correct is a good idea!

@javiertoledo (Member):

/integration sha=691ea7d7e14029af46c00b2cad611702f9ec37d5

Contributor

⌛ Integration tests are running...

Check their status here 👈

Contributor

❌ Oh no! Integration tests have failed


@heavylemon1 (Contributor, author):

Hi @javiertoledo!
Thanks for the response!
I just checked the integration tests and noticed that they failed at this specific step:

Error: autoscale_settings.0.max_throughput must be a minimum of 1000

After searching the web, I came across this FAQ from Microsoft Azure about exactly this:
https://learn.microsoft.com/en-us/azure/cosmos-db/autoscale-faq#what-s-the-entry-point-ru-s-for-autoscale-

I think that to fix this issue we should change the minimum to 1000 (the minimum RU/s for autoscale) instead of 400, or, if we have a way of knowing whether the RU/s are manually provisioned, use the 400 RU/s value in that case.
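To illustrate the distinction being proposed (the function and constant names here are hypothetical, not from the PR): autoscale accounts require at least 1000 RU/s, while manually provisioned throughput can go as low as 400 RU/s, so the applied maximum could be clamped depending on the mode.

```typescript
// Sketch: Azure requires autoscale max throughput >= 1000 RU/s, while
// manually provisioned throughput can go as low as 400 RU/s.
const AUTOSCALE_MIN_THROUGHPUT = 1000
const PROVISIONED_MIN_THROUGHPUT = 400

function effectiveMaxThroughput(requested: number, autoscale: boolean): number {
  const minimum = autoscale ? AUTOSCALE_MIN_THROUGHPUT : PROVISIONED_MIN_THROUGHPUT
  return Math.max(requested, minimum)
}
```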

@gonzalogarciajaubert (Collaborator):

/integration sha=691ea7d7e14029af46c00b2cad611702f9ec37d5

Contributor

⌛ Integration tests are running...

Check their status here 👈

Contributor

❌ Oh no! Integration tests have failed

@gonzalogarciajaubert gonzalogarciajaubert merged commit 4d79591 into boostercloud:main Feb 15, 2024
5 checks passed