- Prerequisites
- Set Up a Sender
- Send Data to Wavefront
- Close the Sender
- License
- How to Get Support and Contribute
Wavefront by VMware Go SDK lets you send raw data from your Go application to Wavefront using a `Sender` interface. The data is then stored as metrics, histograms, and trace data. This SDK is also called the Wavefront Sender SDK for Go.
Although this library is mostly used by the other Wavefront Go SDKs to send data to Wavefront, you can also use this SDK directly. For example, you can send data directly from a data store or CSV file to Wavefront.
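For instance, here is a minimal sketch of the CSV case. The file name, column layout, and metric name are illustrative assumptions, it relies only on the standard library (`encoding/csv`, `os`, `strconv`), and `sender` is a Sender created as described in "Set Up a Sender" below:

```go
// Illustrative only: read rows of "timestamp,value" from a CSV file and
// report each row as a metric point through an already-created sender.
f, err := os.Open("power-usage.csv") // hypothetical file
if err != nil {
	// handle error
}
defer f.Close()

rows, err := csv.NewReader(f).ReadAll()
if err != nil {
	// handle error
}

for _, row := range rows {
	ts, _ := strconv.ParseInt(row[0], 10, 64)  // column 0: epoch seconds
	value, _ := strconv.ParseFloat(row[1], 64) // column 1: metric value
	sender.SendMetric("new-york.power.usage", value, ts, "csv-import", nil)
}
```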
Before you start implementing, let us make sure you are using the correct SDK!
Note:
- This is the Wavefront by VMware SDK for Go (Wavefront Sender SDK for Go)! If this SDK is not what you were looking for, see the table below.
SDK Type | SDK Description
---|---
OpenTracing SDK | Implements the OpenTracing specification. Lets you define, collect, and report custom trace data from any part of your application code. Automatically derives Rate Errors Duration (RED) metrics from the reported spans.
Metrics SDK | Implements a standard metrics library. Lets you define, collect, and report custom business metrics and histograms from any part of your application code.
Framework SDK | Reports predefined traces, metrics, and histograms from the APIs of a supported app framework. Lets you get started quickly with minimal code changes.
Sender SDK | Lets you send raw data to Wavefront for storage as metrics, histograms, or traces, e.g., to import CSV data into Wavefront.
- Go 1.9 or higher.
- Import the `senders` package (you can fetch the module with `go get`, as noted below):

```go
import (
	wavefront "github.com/wavefronthq/wavefront-sdk-go/senders"
)
```
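If your project uses Go modules, you can typically fetch the SDK with `go get github.com/wavefronthq/wavefront-sdk-go/senders` before importing it.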
You can send metrics, histograms, or trace data from your application to the Wavefront service using a Wavefront proxy or direct ingestion.
- Option 1: Use a Wavefront proxy, which then forwards the data to the Wavefront service. This is the recommended choice for a large-scale deployment that needs resilience to internet outages, control over data queuing and filtering, and more. Create a `ProxyConfiguration` to send data to a Wavefront proxy.
- Option 2: Use direct ingestion to send the data directly to the Wavefront service. This is the simplest way to get up and running quickly. Create a `DirectConfiguration` to send data directly to a Wavefront service.
Depending on the data you wish to send to Wavefront (metrics, distributions/histograms, and/or spans), enable the relevant ports on the proxy and initialize the proxy sender.
```go
import (
	"time"

	wavefront "github.com/wavefronthq/wavefront-sdk-go/senders"
)

func main() {
	proxyCfg := &wavefront.ProxyConfiguration{
		Host: "proxyHostname or proxyIPAddress",

		// At least one of the ports below should be set.
		MetricsPort:      2878,  // set this (typically 2878) to send metrics
		DistributionPort: 2878,  // set this (typically 2878) to send distributions
		TracingPort:      30000, // set this to send tracing spans; use the same port as the customTracingListenerPorts configured on the Wavefront proxy

		FlushIntervalSeconds: 10, // flush the buffer periodically; defaults to 5 seconds
	}

	sender, err := wavefront.NewProxySender(proxyCfg)
	if err != nil {
		// handle error
	}

	// send data (see below for usage)

	time.Sleep(5 * time.Second)
	sender.Flush()
	sender.Close()
}
```
To send data using direct ingestion, initialize the direct sender with your Wavefront instance URL and an API token:

```go
import (
	"time"

	wavefront "github.com/wavefronthq/wavefront-sdk-go/senders"
)

func main() {
	directCfg := &wavefront.DirectConfiguration{
		Server: "https://INSTANCE.wavefront.com", // your Wavefront instance URL
		Token:  "YOUR_API_TOKEN",                 // API token with direct ingestion permission

		// Optional configuration properties. Default values should suffice for most use cases.
		// Override the defaults only if you wish to set higher values.

		// Max batch of data sent per flush interval. Defaults to 10,000.
		// Recommended not to exceed 40,000.
		BatchSize: 10000,

		// Size of internal buffer beyond which received data is dropped.
		// Helps with handling brief increases in data and buffering on errors.
		// Separate buffers are maintained per data type (metrics, spans, and distributions).
		// Defaults to 500,000. Higher values could use more memory.
		MaxBufferSize: 500000,

		// Interval (in seconds) at which to flush data to Wavefront. Defaults to 1 second.
		// Together with the batch size, this controls the maximum theoretical throughput of the sender.
		FlushIntervalSeconds: 1,
	}

	sender, err := wavefront.NewDirectSender(directCfg)
	if err != nil {
		// handle error
	}

	// send data (see below for usage)

	time.Sleep(5 * time.Second)
	sender.Flush()
	sender.Close()
}
```
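For example, with the defaults shown above (a `BatchSize` of 10,000 and a `FlushIntervalSeconds` of 1), the sender can push at most roughly 10,000 points per second per data type; raise one or both values if you need higher throughput.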
Wavefront supports different metric types, such as gauges, counters, delta counters, histograms, traces, and spans. See Metrics for details. The examples below show how to send each type of data to Wavefront through the `Sender` interface.
```go
// Wavefront metrics data format
// <metricName> <metricValue> [<timestamp>] source=<source> [pointTags]
// Example: "new-york.power.usage 42422 1533529977 source=localhost datacenter=dc1"
sender.SendMetric("new-york.power.usage", 42422.0, 0, "go_test", map[string]string{"env": "test"})

// Wavefront delta counter format
// <metricName> <metricValue> source=<source> [pointTags]
// Example: "lambda.thumbnail.generate 10 source=thumbnail_service image-format=jpeg"
sender.SendDeltaCounter("lambda.thumbnail.generate", 10.0, "thumbnail_service", map[string]string{"format": "jpeg"})
```

Note: If your `metricName` has a bad character, that character is replaced with a `-`.
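For example (an illustrative assumption about the sanitization behavior, not an exhaustive specification), a metric name such as `new york.power usage` would typically be reported as `new-york.power-usage`.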
import "github.com/wavefronthq/wavefront-sdk-go/histogram"
// Wavefront Histogram data format
// {!M | !H | !D} [<timestamp>] #<count> <mean> [centroids] <histogramName> source=<source> [pointTags]
// Example: You can choose to send to at most 3 bins - Minute/Hour/Day
// "!M 1533529977 #20 30.0 #10 5.1 request.latency source=appServer1 region=us-west"
// "!H 1533529977 #20 30.0 #10 5.1 request.latency source=appServer1 region=us-west"
// "!D 1533529977 #20 30.0 #10 5.1 request.latency source=appServer1 region=us-west"
centroids := []histogram.Centroid {
{
Value : 30.0,
Count : 20,
},
{
Value : 5.1,
Count : 10,
},
}
hgs := map[histogram.Granularity]bool {
histogram.MINUTE : true,
histogram.HOUR : true,
histogram.DAY : true,
}
sender.SendDistribution("request.latency", centroids, hgs, 0, "appServer1", map[string]string {"region" : "us-west"})
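If you start from raw observations rather than pre-aggregated centroids, the following minimal sketch shows one way to build the `centroids` slice before calling `SendDistribution`. The `latencies` slice and the bucketing-by-exact-value approach are illustrative assumptions, not part of the SDK:

```go
// Illustrative only: aggregate raw latency samples into centroids by exact value.
// In practice you might bucket values or use a streaming histogram instead.
latencies := []float64{30.0, 30.0, 5.1, 30.0, 5.1} // hypothetical raw samples

counts := make(map[float64]int)
for _, v := range latencies {
	counts[v]++
}

var centroids []histogram.Centroid
for value, count := range counts {
	centroids = append(centroids, histogram.Centroid{Value: value, Count: count})
}
```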
When you use a Sender SDK, you won't see span-level RED metrics by default unless you use the Wavefront proxy and define a custom tracing port (`TracingPort`). See Instrument Your Application with Wavefront Sender SDKs for details.
```go
// Wavefront tracing span data format
// <tracingSpanName> source=<source> [pointTags] <start_millis> <duration_milliseconds>
// Example:
// "getAllUsers source=localhost traceId=7b3bf470-9456-11e8-9eb6-529269fb1459
//  spanId=0313bafe-9457-11e8-9eb6-529269fb1459 parent=2f64e538-9457-11e8-9eb6-529269fb1459
//  application=Wavefront http.method=GET 1552949776000 343"
sender.SendSpan("getAllUsers", 1552949776000, 343, "localhost",
	"7b3bf470-9456-11e8-9eb6-529269fb1459",
	"0313bafe-9457-11e8-9eb6-529269fb1459",
	[]string{"2f64e538-9457-11e8-9eb6-529269fb1459"},
	nil,
	[]wavefront.SpanTag{
		{Key: "application", Value: "Wavefront"},
		{Key: "service", Value: "istio"},
		{Key: "http.method", Value: "GET"},
	},
	nil)
```
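To report a parent and a child span, give both spans the same trace ID and list the parent's span ID in the child's `parents` slice. Here is a minimal sketch; the `github.com/google/uuid` dependency and the span names, timings, and tag values are assumptions for illustration, not part of the SDK:

```go
traceID := uuid.New().String()
parentSpanID := uuid.New().String()
childSpanID := uuid.New().String()
startMillis := time.Now().UnixNano() / int64(time.Millisecond)

// Parent span
sender.SendSpan("getAllUsers", startMillis, 343, "localhost",
	traceID, parentSpanID, nil, nil,
	[]wavefront.SpanTag{
		{Key: "application", Value: "Wavefront"},
		{Key: "service", Value: "user-service"},
	},
	nil)

// Child span: same trace ID, with the parent's span ID in the parents slice
sender.SendSpan("getAllUsers.dbQuery", startMillis+5, 120, "localhost",
	traceID, childSpanID,
	[]string{parentSpanID},
	nil,
	[]wavefront.SpanTag{
		{Key: "application", Value: "Wavefront"},
		{Key: "service", Value: "user-service"},
	},
	nil)
```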
Note: The tracing and span SDK APIs are designed to serve as low-level endpoints. For most use cases, we recommend using the OpenTracing SDK with the `WavefrontTracer`.
- See the Go OpenTracing project for details.
- To use OpenTracing with Wavefront, see the Wavefront Go OpenTracing SDK.
Before shutting down your application, flush the buffer and close the sender.
```go
// Get the number of failures observed while sending metrics, histograms, and spans.
totalFailures := sender.GetFailureCount()

// Flush the buffer on demand.
sender.Flush()

// Close the sender before shutting down your application.
sender.Close()
```
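In a long-running service you would typically defer this cleanup right after creating the sender. A minimal sketch follows; logging via the standard `log` package is an illustrative choice, not something the SDK requires:

```go
defer func() {
	sender.Flush() // push any points still buffered
	if n := sender.GetFailureCount(); n > 0 {
		log.Printf("wavefront sender: %d points failed to send", n)
	}
	sender.Close() // release the sender's resources
}()
```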
- Reach out to us on our public Slack channel.
- If you run into any issues, let us know by creating a GitHub issue.