[ISSUE] record.wait_for_feedback_results() with TruLlama not recording results #1638
Labels: bug

Comments
Hi @sfc-gh-pdharmana, any idea on the above? Thank you!
Hey @paul-gleeson - I was able to get this working. Here's a slimmed down version of your code I added on to the TruLlama quickstart to test:

# Imports for the snippet; paths follow the trulens >= 1.0 package layout
import os

from trulens.core import Feedback
from trulens.apps.llamaindex import TruLlama
from trulens.providers.huggingface import Huggingface

os.environ["HUGGINGFACE_API_KEY"] = "hf_..."

hugs = Huggingface()

# Feedback functions applied to the query (input) and the response (output)
f_pii_detection_input = Feedback(hugs.pii_detection).on_input()
f_pii_detection_output = Feedback(hugs.pii_detection).on_output()
f_toxicity = Feedback(hugs.toxic).on_input()
f_positive_sentiment = Feedback(hugs.positive_sentiment).on_output()

# Wrap the LlamaIndex query engine in a TruLlama recorder
tru_query_engine_recorder = TruLlama(
    query_engine,
    app_name="LlamaIndex_App",
    app_version="base",
    feedbacks=[f_pii_detection_input, f_pii_detection_output, f_toxicity, f_positive_sentiment],
)

def interact_with_model(prompt_input):
    with tru_query_engine_recorder as recording:
        llm_response = query_engine.query(prompt_input)

    # Get the record & extract feedback results
    record = recording.get()
    feedback_results_list = []
    for feedback, result in record.wait_for_feedback_results().items():
        feedback_results_list.append((feedback.name, result.result))
        print(feedback.name, result.result)

    return llm_response
Calling interact_with_model("What did the author do growing up?") returns the LLM response and prints a result for each of the four feedback functions.
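If the scores still don't seem to show up on your side, one way to double-check what actually landed in the TruLens database is to query the session directly. A minimal sketch, assuming the trulens >= 1.0 TruSession API and the recorder defined above:

from trulens.core import TruSession

session = TruSession()

# Aggregate feedback scores per app name/version
print(session.get_leaderboard())

# Per-record values; the second return value is the list of feedback column names
records_df, feedback_cols = session.get_records_and_feedback()
print(records_df[feedback_cols].tail())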
Bug Description
I'm using Hugging Face as the provider to generate feedback from a RAG model, with TruLlama as the base of the feedback recorder. Even though I'm calling record.wait_for_feedback_results(), I'm not seeing any feedback results for the responses from my RAG model. I'm following the same code structure that I used for a plain LLM response, except that in that instance I used LLMChain instead of TruLlama.
To Reproduce
Here is my code:
Relevant Logs/Tracebacks
[('pii_detection', None), ('pii_detection', None), ('positive_sentiment', None), ('toxic', None)]
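When the values come back as None like this, it can help to inspect each FeedbackResult's status and error instead of just result. A minimal sketch, assuming the FeedbackResult objects returned by record.wait_for_feedback_results() expose status and error fields, as in recent trulens releases:

for feedback, result in record.wait_for_feedback_results().items():
    # A None result together with a FAILED status and a populated error usually
    # points at the provider call itself failing (e.g. a missing or invalid
    # HUGGINGFACE_API_KEY) rather than at a recording problem
    print(feedback.name, result.result, result.status, result.error)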
Environment:
Additional context
With the current setup, the evaluation metrics are working as expected on the LLM.
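For comparison, here is a rough sketch of the kind of setup described for the plain LLM case, assuming an LLMChain wrapped with TruChain. The llm, prompt, and app names are illustrative and not taken from the issue, and the feedback functions are the ones defined in the comment above:

from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain_openai import ChatOpenAI
from trulens.apps.langchain import TruChain

# Hypothetical chain; ChatOpenAI stands in for whatever model the working setup used
llm = ChatOpenAI()
prompt = PromptTemplate.from_template("Answer the question: {question}")
chain = LLMChain(llm=llm, prompt=prompt)

tru_chain_recorder = TruChain(
    chain,
    app_name="LLM_App",
    app_version="base",
    feedbacks=[f_pii_detection_input, f_pii_detection_output, f_toxicity, f_positive_sentiment],
)

with tru_chain_recorder as recording:
    chain.invoke({"question": "What did the author do growing up?"})

record = recording.get()
print(record.wait_for_feedback_results())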