Internal server error when location field not set #789
I just encountered this with some generic node metrics as well, which is interesting because they were fine previously. I was getting this error over and over again until I added the location resource attribute.
Are you explicitly setting cloud.availability_zone to ""? We should be falling back to "global" if none is set: https://github.com/GoogleCloudPlatform/opentelemetry-operations-go/blob/main/internal/resourcemapping/resourcemapping.go#L155
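The fallback being described can be sketched roughly like this. This is a minimal illustration only; `resolveLocation` and its signature are assumptions for clarity, not the actual code in `resourcemapping.go`:

```go
package main

import "fmt"

// resolveLocation sketches the discussed fallback order: prefer the
// cloud.availability_zone attribute, then cloud.region, then "global".
// Hypothetical helper; the real mapping lives in
// internal/resourcemapping/resourcemapping.go.
func resolveLocation(zone, region string) string {
	if zone != "" {
		return zone
	}
	if region != "" {
		return region
	}
	return "global"
}

func main() {
	fmt.Println(resolveLocation("", ""))                   // neither set: falls back to "global"
	fmt.Println(resolveLocation("", "us-east1"))           // region set: "us-east1"
	fmt.Println(resolveLocation("us-east1-b", "us-east1")) // zone wins: "us-east1-b"
}
```

The point of the question above is whether an explicit empty string would bypass this fallback, since `""` is indistinguishable from "unset" in this sketch.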
This might be a regression.
I don't think it is a regression, since I don't think the fallbacks in question have been changed. But I do think it is best to fall back to global where it makes sense. I opened #795.
Sorry for the late reply. No, we do not set cloud.availability_zone to "". If we do set it, we use a valid zone. #795 sounds good to me; I think that would be helpful.
The error message should be improved to actually indicate that the location is the problem. That change should roll out in the next week or two. But I've talked with some folks from Cloud Monitoring, and they don't think falling back to global is a good idea. It isn't actually a global location, which is misleading and can make users think the data is replicated globally, which it isn't. It is strongly recommended that users pick a location that is at least reasonably close to them so it doesn't cause problems on the query side. If you really want to use global, which I don't recommend, you can hard-code it using the resource processor:

```yaml
processors:
  resource/defaultglobal:
    attributes:
      - key: cloud.availability_zone
        value: "global"
        action: upsert
      - key: cloud.region
        value: "global"
        action: upsert
```

Closing as won't fix. Feel free to reopen if you have other questions.
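For context, a full collector configuration wiring such a processor into the metrics pipeline might look like the sketch below. The receiver choice and the `googlecloud` exporter placement are assumptions based on a typical setup, not taken from the reporter's actual config:

```yaml
receivers:
  otlp:
    protocols:
      grpc:

processors:
  # Either cloud.availability_zone or cloud.region is enough to
  # populate the location; both are upserted here for completeness.
  resource/defaultglobal:
    attributes:
      - key: cloud.availability_zone
        value: "global"
        action: upsert
      - key: cloud.region
        value: "global"
        action: upsert

exporters:
  googlecloud:

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [resource/defaultglobal]
      exporters: [googlecloud]
```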
This is a follow-up for #760. The issue with logs being silently dropped has been resolved, and that has exposed the real issue I am encountering.
When sending a `k8s_container` metric to Cloud Monitoring, I get an "internal server error". It is resolved by setting `cloud.availability_zone` (or the region resource attribute). With logging, I can get by without setting the location field. It seems that this should be the case with metrics as well. My cluster is not part of a cloud and has no reasonable region/zone value. This is also true for the GCP customers that we (observIQ) support: they have on-premises clusters.
This is the error text, formatted to be easily readable:
The error's metric descriptor does make it clear that the location field is unset.
This is the log from the collector (with the error value removed):
Exporter config: