
Observability Feature Request: Metrics, Data Monitoring, and Graph Visualization #5767

Closed
naman-modi opened this issue Jun 6, 2023 · 5 comments

Comments

@naman-modi (Contributor)

Feature request

I would like to propose the addition of an observability feature to LangChain, enabling developers to monitor and analyze their applications more effectively. This feature aims to provide metrics, data tracking, and graph visualization capabilities to enhance observability.

Key aspects of the proposed feature include:

  • Metrics Tracking: Capture the time taken by the LLM to handle each request, errors encountered, token counts, and a cost estimate for the particular LLM.
  • Data Tracking: Log and store prompt, request, and response data for each LangChain interaction.
  • Graph Visualization: Generate basic graphs over time, depicting metrics such as request duration, error occurrences, token count, and cost.

This feature request aims to improve the development and debugging experience by providing developers with better insights into the performance, behavior, and cost implications of their LangChain applications.
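
To make the shape of this concrete, here is a rough sketch of what such a metrics callback could look like, built on LangChain's existing `BaseCallbackHandler` hooks. The class name and record fields below are purely illustrative, not a proposed final API:

```python
import time
from typing import Any, Dict, List

from langchain.callbacks.base import BaseCallbackHandler


class MetricsCallbackHandler(BaseCallbackHandler):
    """Illustrative sketch: record latency, token usage, and errors per LLM call."""

    def __init__(self) -> None:
        self.records: List[Dict[str, Any]] = []
        self._start: float = 0.0

    def on_llm_start(self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) -> None:
        # Mark the start of the request so latency can be computed on completion.
        self._start = time.monotonic()

    def on_llm_end(self, response: Any, **kwargs: Any) -> None:
        # OpenAI-backed LLMs report token counts in llm_output["token_usage"].
        usage = (response.llm_output or {}).get("token_usage", {})
        self.records.append({
            "latency_s": time.monotonic() - self._start,
            "total_tokens": usage.get("total_tokens"),
            "error": None,
        })

    def on_llm_error(self, error: BaseException, **kwargs: Any) -> None:
        self.records.append({
            "latency_s": time.monotonic() - self._start,
            "total_tokens": None,
            "error": repr(error),
        })
```

A handler like this could be passed via `callbacks=[MetricsCallbackHandler()]` on any LLM or chain call, and the accumulated records could feed the graphs described above.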

Motivation

Last week, while building a chat-based application on OpenAI's GPT-3.5 API to answer SEO-related queries, I encountered a significant challenge. During the launch, I realized that I had no means to monitor the responses suggested by the language model (GPT, in my case). Additionally, I had no insight into its performance, such as speed or token usage. To check the token count, I had to repeatedly log in to the OpenAI dashboard, which was time-consuming and cumbersome.

It was frustrating not having a clear picture of user interactions and the effectiveness of the system's responses. I realized that I needed a way to understand how the system handled different types of queries and to identify areas that required improvement.

I strongly believe that incorporating an observability feature in LangChain would be immensely valuable. It would empower developers like me to track user interactions, analyze the quality of responses, and measure the performance of the underlying LLM requests. Having these capabilities would not only provide insights into user behavior but also enable us to continuously improve the system's accuracy, response time, and overall user experience.

Your contribution

Yes, I am planning to raise a PR along with a couple of my friends to add an observability feature to LangChain.

I would be more than happy to take suggestions from the community on what we could add to make it more usable!

@OlajideOgun (Contributor)

Have you tried Graphsignal?
https://graphsignal.com/docs/integrations/langchain/

@naman-modi (Contributor Author) commented Jun 19, 2023

Hey @OlajideOgun, thanks for the suggestion! Graphsignal wraps LangChain from the outside, whereas I wanted a solution integrated within LangChain itself, in the form of callbacks, to make it more usable.

So, we finally managed to create an Infino integration with #6218 for the above use cases.
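
For anyone who wants to try it, usage looks roughly like this (the `model_id` and `model_version` values are placeholders; see the PR and docs for the exact parameters):

```python
from langchain.callbacks import InfinoCallbackHandler
from langchain.llms import OpenAI

# Stream metrics (latency, errors, token usage) and prompt/response data to Infino.
handler = InfinoCallbackHandler(
    model_id="my_model",      # placeholder id used to group metrics
    model_version="0.1",      # placeholder version tag
    verbose=False,
)

llm = OpenAI(temperature=0.1)
llm("What is the capital of France?", callbacks=[handler])
```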

@OlajideOgun (Contributor)

Just checked it out; this is very cool!

@naman-modi (Contributor Author) commented Jun 20, 2023

Thanks! If you plan to use it, please do share the feedback/improvements :)

@naman-modi (Contributor Author)

Closing the issue, as #6218 is merged.
