Replies: 5 comments
-
🤖 Hey @AndiLeni, great to see you diving into another challenge! Let's tackle this one together. 🚀

To resolve this issue, you can break your large text source into smaller chunks and process each chunk separately. Here's how you can do it:

```javascript
import { MistralAIEmbeddings } from "@langchain/mistralai";

/* Embed queries */
const embeddings = new MistralAIEmbeddings({
  apiKey: process.env.MISTRAL_API_KEY,
});

// Assuming `largeTextSource` is your large text source file
const largeTextSource = ["Hello world", "Bye bye" /* ... */]; // replace with your actual data
const chunkSize = 100; // adjust this value based on the server's limit

for (let i = 0; i < largeTextSource.length; i += chunkSize) {
  const chunk = largeTextSource.slice(i, i + chunkSize);
  const documentRes = await embeddings.embedDocuments(chunk);
  console.log({ documentRes });
}
```

In this code, we're splitting the `largeTextSource` array into chunks of `chunkSize` items and embedding each chunk in a separate request. The maximum request size limit is enforced by the server, so adjust `chunkSize` until each individual request stays under it.

I hope this helps! If you have any other questions, feel free to ask.

Sources
-
@dosu-bot The issue is that requests are made too fast, because only 5 requests per second are allowed. How can I slow down LangChain's requests?
-
🤖 To slow down the requests to the MistralAI API, you can introduce a delay between each request. This can be achieved with a small `delay` helper built on `setTimeout`.

Here's how you can modify your code:

```javascript
// Define a function that returns a promise that resolves after a delay
function delay(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

// Use the delay function in the loop
// (assuming `documents` is an array of LangChain Document objects)
for (const doc of documents) {
  const embedding = await embeddings.embedQuery(doc.pageContent);
  await vectorStore.addVectors([embedding], [doc]);
  await delay(200); // Delay of 200 ms to comply with the rate limit of 5 requests per second
}
```

In this code, the `delay` function pauses the loop for 200 ms between requests, which keeps you under 5 requests per second. Please note that this is a simple rate-limiting solution and might not be suitable for all use cases. For more complex scenarios, you might want to consider a more sophisticated rate-limiting library that can handle things like backoff and retry strategies.

For more information on how to use promises with `setTimeout`, you can refer to the MDN Web Docs guide.

I hope this helps! If you have any other questions, feel free to ask.

Sources
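The backoff-and-retry idea mentioned above can also be sketched without an extra library. This is a minimal sketch under the thread's assumptions; `withRetry` and `sleep` are hypothetical helpers, not part of LangChain or the Mistral SDK:

```typescript
// Minimal retry-with-exponential-backoff sketch (hypothetical helper, not a
// LangChain API): retries a failing async call with growing delays between tries.
const sleep = (ms: number): Promise<void> =>
  new Promise((resolve) => setTimeout(resolve, ms));

async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts - 1) {
        // Exponential backoff: wait baseDelayMs, then 2x, 4x, ... between attempts
        await sleep(baseDelayMs * 2 ** attempt);
      }
    }
  }
  throw lastError;
}
```

You could then wrap each `embeddings.embedDocuments(chunk)` call in `withRetry(...)` so a transient 429 response is retried after a pause instead of failing the whole indexing run.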
-
This happens on the basic RAG example "https://docs.mistral.ai/guides/rag/". This should work with such a basic and small example, no?
-
Hi @AndiLeni 👋🏻 You can try throttling the requests with a small sleep helper between calls:

```typescript
export const sleep = async (ms: number) =>
  new Promise((resolve) => {
    setTimeout(resolve, ms);
  });
```
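Putting this sleep helper together with the chunking approach from the first reply might look like the sketch below. Here `embedInBatches` is a hypothetical name, and `fakeEmbed` is a stand-in so the example runs without an API key; in real use you would pass `embeddings.embedDocuments` as the `embed` argument instead:

```typescript
// Sketch: rate-limited batch embedding. `fakeEmbed` stands in for the real
// embeddings.embedDocuments call so this example needs no API key.
const sleep = async (ms: number): Promise<void> =>
  new Promise((resolve) => {
    setTimeout(resolve, ms);
  });

const fakeEmbed = async (batch: string[]): Promise<number[][]> =>
  batch.map((text) => [text.length]); // placeholder one-dimensional "vectors"

async function embedInBatches(
  texts: string[],
  embed: (batch: string[]) => Promise<number[][]>,
  batchSize: number,
  delayMs: number,
): Promise<number[][]> {
  const vectors: number[][] = [];
  for (let i = 0; i < texts.length; i += batchSize) {
    const batch = texts.slice(i, i + batchSize);
    vectors.push(...(await embed(batch)));
    await sleep(delayMs); // pause between requests to respect the rate limit
  }
  return vectors;
}
```

With a 250 ms delay per batch you stay comfortably under the 5-requests-per-second limit mentioned earlier in the thread.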
-
Hello,
I was following the docs and tried to index some documents with LanceDB and MistralAI.
However, I run into `Request size limit exceeded` errors when I create the embeddings. The text source file is rather large: it has approximately 35,000 lines.
The code I tried is this one:
Does anyone have an idea how to fix this, or can tell me what I was doing wrong?
Thanks and kind regards