James.Diagnostics is a convention-based library built on top of Magnum that helps you add custom performance counters to your existing applications. It assumes that, in most cases, you will want to instrument your application logic for:
- Successes
- Failures
- Average Execution Times
If you need other types of custom counters, libraries such as Magnum, written by Chris Patterson and Dru Sellers, can help you.
From the package manager console in Visual Studio, you can enter the following:
> install-package James.Diagnostics
Let's say you have a service that allows clients to get and update customer information.
public IEnumerable<Customer> GetCustomers()
{
return _repository.GetAll();
}
public void UpdateCustomer(Customer customer)
{
_repository.Update(customer);
}
Creating custom performance counters for these two methods is as simple as inheriting from MonitorableCounterCategory as shown below.
public class CustomerService_GetCustomersCounters
: MonitorableCounterCategory
{
}
public class CustomerService_UpdateCustomerCounters
: MonitorableCounterCategory
{
}
Once these counter categories are in place, you can use them to Monitor() your methods.
public IEnumerable<Customer> GetCustomers()
{
return Monitoring<CustomerService_GetCustomersCounters>.Monitor(() => _repository.GetAll());
}
public void UpdateCustomer(Customer customer)
{
Monitoring<CustomerService_UpdateCustomerCounters>.Monitor(() => _repository.Update(customer));
}
Below is a sample screenshot of Performance Monitor showing one of these performance counter categories in action. In the graph there are three colored lines representing the three counters.
- Blue Line - represents the execution times in milliseconds.
- Green Line - represents the number of successes.
- Red Line - represents the number of failures.
If these metrics are brought into tools such as SCOM, they can be further manipulated to derive computed metrics such as the percentage of successes or average execution times aggregated by time periods.
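As a rough sketch of the kind of computed metric such tools derive, here is the arithmetic on hypothetical raw counter samples (the values are illustrative, not part of James.Diagnostics):

```csharp
// Hypothetical raw samples read from the success/failure counters over some period.
long successes = 980;
long failures = 20;

// Percentage of successes over the period.
double successRate = successes * 100.0 / (successes + failures);  // 98.0

// Average execution time, assuming the tool also aggregates total elapsed time.
double totalElapsedMs = 49000;
double averageMs = totalElapsedMs / successes;  // 50.0
```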
There are multiple options for monitoring the execution of your synchronous code with custom performance counters.
- Takes an action and increments the success/failure counters depending on whether or not the action succeeds (i.e., does not throw). If an exception occurs, it bubbles up to the caller.
- Takes a function and increments the success/failure counters in the same way. If an exception occurs, it bubbles up; if the function succeeds, its result is also returned to the caller.
- Takes a function that returns a boolean and increments the success/failure counters depending on whether the function returns true or false. The boolean value is not returned to the caller, however, so only use this overload when instrumenting code that handles exceptions gracefully rather than letting them bubble up.
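The three options above can be sketched as follows, reusing the counter categories defined earlier (the exact overload shapes are an assumption based on the descriptions, and TryUpdate is a hypothetical helper returning bool):

```csharp
// Option 1: an action -- counts success/failure; exceptions bubble up.
Monitoring<CustomerService_UpdateCustomerCounters>.Monitor(
    () => _repository.Update(customer));

// Option 2: a function -- counts success/failure and returns the result.
var customers = Monitoring<CustomerService_GetCustomersCounters>.Monitor(
    () => _repository.GetAll());

// Option 3: a boolean-returning function -- the returned bool decides
// success vs. failure, but is not passed back to the caller.
Monitoring<CustomerService_UpdateCustomerCounters>.Monitor(
    () => TryUpdate(customer));
```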
Sometimes the gap between beginning and ending your monitoring spans not just time but also processes or machines. This often occurs in distributed systems (especially messaging), where the work you would like to instrument begins on one machine or process and ends on another. James.Diagnostics has you covered in these scenarios.
public void SendMessage(Customer customer)
{
var message = new CustomerUpdate
{
Customer = customer,
Success = true,
Start = DateTime.Now.ToUniversalTime()
};
_bus.Publish(message);
}
public void Consume(CustomerUpdate message)
{
var elapsed = DateTime.Now.ToUniversalTime().Subtract(message.Start);
if (message.Success)
    Monitoring<CustomerService_CustomerUpdatedCounters>.Success(elapsed);
else
    Monitoring<CustomerService_CustomerUpdatedCounters>.Failure();
}
You will notice that in the case of the Failure() method, there is no TimeSpan to provide. This is because a failure either happens immediately or, in the case of a timeout, may take a very long time; in either situation, you do not want it to skew your average execution time metrics.