This repository has been archived by the owner on Dec 15, 2021. It is now read-only.

spanName showing up as 'unknown' in New Relic when property exists #15

Open · cmanzi opened this issue Feb 12, 2020 · 3 comments

Comments

cmanzi commented Feb 12, 2020

Describe the bug
If I understand the tracespan template correctly, the following property should determine the 'name' of the trace span that shows up in New Relic:

...
spanName: destination.workload.name | destination.service.name | "unknown"
...

This appears to be the case for most of my spans. I am running a multicluster setup, and I have spans that cross cluster boundaries. So, the 'destination' for the initial request in these spans is actually a ServiceEntry.

If I search for spans by the desired destination name (productpage.bookinfo.mvp.global in this case), I can see all of the unique spans, and they appear to be correct. However, the name they are assigned is 'unknown'. This is a bit puzzling: based on the tags that are populated, I would have expected the spanName to default to destination.service.name, since destination.workload.name is unknown:

...
destination.ip                   10.61.24.129
destination.name                 unknown
destination.owner                unknown
destination.port                 15443
destination.service.name         productpage.bookinfo.mvp.global
destination.service.namespace    generator
destination.workload.name        unknown
...

It may be the case that I just don't understand how exactly New Relic determines what the name of a span should be.
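
For reference, this is what I would expect that expression to resolve to for these spans (assuming the "|" operator just takes the first attribute that is actually set, which is my reading of the Mixer expression language, so I may be off here):

spanName: destination.workload.name | destination.service.name | "unknown"
# my expected resolution for a request whose destination is the ServiceEntry:
#   destination.workload.name -> no workload behind a ServiceEntry, so skipped
#   destination.service.name  -> "productpage.bookinfo.mvp.global"
#   "unknown"                 -> literal fallback, only used if both attributes are missing
# => expected span name: "productpage.bookinfo.mvp.global"; observed: "unknown"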

To Reproduce
Steps to reproduce the behavior:

  1. Set up Istio in a cluster (1.4.3) with the newrelic adapter
  2. Create a ServiceEntry for a MESH_INTERNAL service (from what I understand of this issue, a full multicluster setup shouldn't be needed to reproduce it). This is the ServiceEntry I am using:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: productpage-mvp
spec:
  hosts:
  # must be of form name.namespace.global
  - productpage.bookinfo.mvp.global
  location: MESH_INTERNAL
  ports:
  - name: http1
    number: 9080
    protocol: http
  resolution: DNS
  addresses:
  - 240.0.100.1
  endpoints:
  - address: mvp.<internal_domain>
    ports:
      http1: 15443 # Do not change this port value
  3. Generate requests to the service entry and observe the spans that populate in New Relic (e.g., with a simple curl pod like the one sketched below).
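
For that last step, something like this throwaway pod works (just a sketch; my real setup uses a traffic-generator deployment in the generator namespace, and the namespace needs sidecar injection so requests go through the mesh):

apiVersion: v1
kind: Pod
metadata:
  name: traffic-generator
  namespace: generator
spec:
  containers:
  - name: curl
    image: curlimages/curl:7.69.1
    command: ["/bin/sh", "-c"]
    # hit the ServiceEntry host once a second so spans keep flowing to New Relic
    args:
    - |
      while true; do
        curl -sS -o /dev/null http://productpage.bookinfo.mvp.global:9080/productpage
        sleep 1
      done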

Expected behavior
Span names are correctly set to the destination.service.name property.

Version information (please complete the following information):

  • newrelic-istio-adapter: 2.0.1
  • Kubernetes: 1.15.7
  • Istio: 1.4.3


MrAlias (Contributor) commented Feb 21, 2020

You bring up an interesting point. In the rule you mentioned:

spanName: destination.workload.name | destination.service.name | "unknown"

destination.service.name becomes the default if destination.workload.name is undefined. From the output though, destination.workload.name isn't undefined, but rather it is defined as "unknown".

This definition of destination.workload.name is specified here. Changing that line to the following should (I haven't verified yet) cause the spanName to fall through and default to destination.service.name:

destination.workload.name: destination.workload.name | ""
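
For context, that line lives under spanTags in the tracespan instance, roughly like this (a sketch only; the instance name/namespace in your install may differ):

apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: newrelic-tracespan   # name may differ in your install
  namespace: istio-system
spec:
  compiledTemplate: tracespan
  params:
    spanName: destination.workload.name | destination.service.name | "unknown"
    spanTags:
      # suggested change: default to "" so the spanName fallback can (hopefully) kick in
      destination.workload.name: destination.workload.name | ""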

If you have time to test, I would love to know if this helps.

I'm thinking that might need to become the default (or something similar).

cmanzi (Author) commented Feb 24, 2020

@MrAlias OK, so I did some testing; none of it fixed the 'unknown' name issue:

  • As recommended: destination.workload.name: destination.workload.name | ""
  • In addition, tried adding: destination.name: destination.workload.name | destination.service.name | "unknown"
  • Also tried modifying: destinationName: destination.workload.name | destination.service.name | "unknown"
  • Finally, tried just: spanName: destination.service.name | "unknown"

Furthermore, I tried enabling debug logging on Mixer to see if it would show me what it was sending to the NR handler. It does not, but it does show the template that it builds for the tracespans it sends (formatted here for readability):

Instance{
  Name: "",
  Template: tracespan,
  Params: {
    apiProtocol:         api.protocol | ""
    clientSpan:          (context.reporter.kind | "inbound") == "outbound"
    destinationIp:       destination.ip | ip("0.0.0.0")
    destinationName:     destination.workload.name | "unknown"
    endTime:             response.time
    httpStatusCode:      response.code | 0
    parentSpanId:        request.headers["x-b3-parentspanid"] | ""
    requestSize:         request.size | 0
    requestTotalSize:    request.total_size | 0
    responseSize:        response.size | 0
    responseTotalSize:   response.total_size | 0
    rewriteClientSpanId: false
    sourceIp:            source.ip | ip("0.0.0.0")
    sourceName:          source.workload.name | "unknown"
    spanId:              request.headers["x-b3-spanid"] | ""
    spanName:            destination.workload.name | destination.service.name | "unknown"
    spanTags: {
      api.name:                       api.service | "unknown"
      api.version:                    api.version | "unknown"
      connection.securityPolicy:      conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none"))
      destination.owner:              destination.owner | "unknown"
      destination.port:               destination.port | 0
      destination.service.name:       destination.service.name | "unknown"
      destination.service.namespace:  destination.service.namespace | "unknown"
      destination.workload.name:      destination.workload.name | ""
      destination.workload.namespace: destination.workload.namespace | "unknown"
      request.auth.principal:         request.auth.principal | "unknown"
      request.host:                   request.host | "unknown"
      request.operation:              conditional((context.protocol | "unknown") == "grpc", request.path | "unknown", request.method | "unknown")
      request.path:                   request.path | ""
      request.protocol:               context.protocol | "unknown"
      source.owner:                   source.owner | "unknown"
      source.workload.name:           source.workload.name | "unknown"
      source.workload.namespace:      source.workload.namespace | "unknown"
    }
    startTime: request.time
    traceId:   request.headers["x-b3-traceid"] | ""
  }
  AttributeBindings: {}
}

This appears to be correct.

I suspect that Mixer is not sending the correct spanName for some reason. However, with no way to see the traces going out of Mixer and coming into the NR adapter, I can't verify this. If we had debug/trace logs that output the reports received from Mixer, we could probably verify that.

Edit: output from the adapter showing what it is posting to NR (formatted here for readability):

2020-02-24T16:05:56.297723Z	debug	newrelic
{
  "data": [
    {
      "common": {
        "attributes": { "cluster.name": "arwen.dev-test.locusdev.net" }
      },
      "spans": [
        {
          "id": "bfa5c866e49e1898",
          "trace.id": "90467a8698114e17bfa5c866e49e1898",
          "timestamp": 1582560350966,
          "attributes": {
            "name": "unknown",
            "duration.ms": 16.691000000000003,
            "service.name": "traffic-generator",
            "request.operation": "GET",
            "destination.ip": "<ip>",
            "source.workload.name": "traffic-generator",
            "connection.securityPolicy": "unknown",
            "api.version": "unknown",
            "destination.service.name": "productpage.bookinfo.mvp.global",
            "request.host": "productpage.bookinfo.mvp.global:9080",
            "request.totalSize": 859,
            "destination.port": 15443,
            "rewriteClientSpanId": false,
            "source.name": "traffic-generator",
            "request.path": "/productpage",
            "response.totalSize": 4322,
            "source.ip": "<ip>",
            "request.size": 0,
            "source.workload.namespace": "generator",
            "api.name": "unknown",
            "destination.workload.namespace": "unknown",
            "response.code": 200,
            "response.size": 4183,
            "source.owner": "kubernetes://apis/apps/v1/namespaces/generator/deployments/traffic-generator",
            "destination.owner": "unknown",
            "destination.workload.name": "unknown",
            "request.protocol": "http",
            "request.auth.principal": "unknown",
            "destination.service.namespace": "generator",
            "clientSpan": true,
            "destination.name": "unknown",
            "api.protocol": ""
          }
        }
      ]
    }
  ],
  "event": "uncompressed request body",
  "url": "https://trace-api.newrelic.com/trace/v1"
}

Given https://github.com/newrelic/newrelic-istio-adapter/blob/master/trace/handler.go#L86, it seems almost certain that Mixer is not sending the spanName, which is puzzling given that it sends destination.service.name correctly.

cmanzi (Author) commented Feb 24, 2020

Opened an issue in Istio to see if they can give any insight: istio/istio#21411
