Insufficient cpu on DO #184
Comments
Hi, thanks for reporting this. I haven't tried the DO guide in a while; I'll recheck when I get a chance. It's possible that the default VM sizes it's now using aren't big enough anymore. You can try adding more nodes, or adding a second node pool with bigger VMs (more CPU / memory), and see if that works. You should be able to do so via the DO web UI.
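For reference, the same thing can be done from the command line. A minimal sketch with doctl, assuming an existing cluster named CLUSTER_NAME (the pool name and node count here are just placeholders):

    # Add a second pool of larger nodes to an existing DOKS cluster
    doctl kubernetes cluster node-pool create CLUSTER_NAME \
        --name bigger-pool \
        --size s-2vcpu-4gb \
        --count 2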
I had a chat with the DO support team and they told me to do basically the same as you suggest, and it worked, so your prediction is probably right. I wish I understood the setup well enough to be more useful with reporting or a fix, but it was a rush job to get it all set up for a training day, and now I'm tearing it all back down to stop them charging me. I'm happy to run through the scripts again later and provide feedback as a novice user, though, if you want. There are definitely bits that could be expanded on, such as getting a certificate, that would help but aren't really in your scope, so I understand why you wouldn't want to cover them.
In my experience, a machine size of s-2vcpu-4gb works:

    doctl kubernetes cluster create --region=REGION CLUSTER_NAME --size=s-2vcpu-4gb

I've run mine on a single node (…).
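If you want to check whether a node that size actually leaves enough room for the pods, you can look at the nodes' allocatable resources. A quick sketch, assuming kubectl is already pointed at the cluster (NODE_NAME is a placeholder taken from the get nodes output):

    # List the nodes, then inspect what the scheduler can actually allocate on one
    kubectl get nodes
    kubectl describe node NODE_NAME | grep -A 6 Allocatable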
Would be cool to have the DO guide updated with a machine type that works better than the default one. I think then we should be able to close this issue.
That sounds good to me. I wish I could help, but I just guessed till it worked.
I mean, if it works, it should probably also work for others 😅 So I guess it can't be worse than the default one, if the default just doesn't cut it resource-wise.
Unfortunately I deleted it straight after the class. I've just checked the invoice and it only shows the name, not the spec. Might be able to reverse it from the price: 80 hours cost $6.12.
I'm following the setup instructions for DigitalOcean. I got to step 2 and ran the get pods step. The juice-balancer pod is stuck in the Pending state, and when I describe the pod I get the "Insufficient cpu" error from the title.

I know nothing about Kubernetes or DO setup, so I'm stalled here. How do I allocate the extra resources so the provisioning can go ahead? I'll probably be hosting about 12 users with light load, in case that makes a difference.
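For anyone hitting the same symptom, the usual way to confirm the cause is to look at the pod's scheduling events. A sketch (the exact pod name is whatever get pods reports for the balancer):

    # Find the stuck pod, then check why it can't be scheduled
    kubectl get pods
    kubectl describe pod juice-balancer-POD_SUFFIX

A Pending pod whose events show "Insufficient cpu" means no node has enough unreserved CPU for the pod's requests, which is exactly what the bigger node pool discussed above fixes.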