
How good is fine-tuning at helping generate queries? #504

Answered by aazo11
vemonet asked this question in Q&A

Based on our benchmarking, fine-tuning reduces token usage and latency, but it does not improve accuracy with GPT-3.5. We have early access to GPT-4 fine-tuning, which shows better results, though the fine-tuning cost is quite high. You should always keep the RAG elements in place, and Dataherald allows you to deploy a fine-tuned model within the agent framework.
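Since Dataherald exposes its engine over HTTP, a deployment along those lines might look roughly like the sketch below: create a fine-tuning job, then reference the resulting model when generating SQL so the agent's RAG elements stay in the loop. The endpoint paths, payload fields, and ids here are assumptions for illustration, not the confirmed API; check the Dataherald docs for the exact contract.

```python
# Hedged sketch: fine-tune, then generate SQL through the agent with the
# fine-tuned model. Endpoint paths and field names are assumptions.
import requests

BASE_URL = "http://localhost"  # assumed local Dataherald engine

# Kick off fine-tuning over previously uploaded golden question/SQL pairs.
job = requests.post(
    f"{BASE_URL}/api/v1/finetunings",  # assumed path
    json={
        "db_connection_id": "<your-db-connection-id>",  # placeholder
        "base_llm": {"model_name": "gpt-3.5-turbo"},    # assumed field
    },
).json()

# Generate SQL with the fine-tuned model while the agent's RAG
# elements (schema scanning, golden-record retrieval) remain in place.
response = requests.post(
    f"{BASE_URL}/api/v1/sql-generations",  # assumed path
    json={
        "finetuning_id": job["id"],        # assumed field
        "prompt": {
            "text": "How many orders were placed last month?",
            "db_connection_id": "<your-db-connection-id>",
        },
    },
).json()
print(response)
```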

In terms of training data, you need ~10 samples per table at a minimum to see results.
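For a concrete sense of what that means, the ~10 samples per table are natural-language question/SQL pairs. The sketch below is entirely illustrative; the field names are assumptions, not Dataherald's actual schema.

```python
# Illustrative only: a rough shape for ~10 training samples on one table.
# Field names are assumed; the queries are made up for this example.
golden_samples = [
    {
        "prompt_text": "How many customers signed up in 2023?",
        "sql": "SELECT COUNT(*) FROM customers "
               "WHERE signup_date >= '2023-01-01' "
               "AND signup_date < '2024-01-01';",
    },
    {
        "prompt_text": "What is the average order value per customer?",
        "sql": "SELECT customer_id, AVG(total) AS avg_order_value "
               "FROM orders GROUP BY customer_id;",
    },
    # ... roughly 8 more pairs covering the table's common query patterns
]
```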
