Implementation of letmeexplain.ai #521
Conversation
Hey @letmeexplainAI, interesting proposal, and it's something we've contemplated adding to our dev docs already.

Right now our first priority is getting actual documentation written, then training an LLM on it. The white paper already reflects a slightly out-of-date model of the living code, and will get more stale as time goes on. Can you tell me more about your product? Big questions are:

1. Will it progressively update its model as our documentation improves? (I assume it's doing RAG or fine-tuning or building up a huge context window or something)
2. What base model does it use?
3. What are your pricing plans for open-source projects stewarded by non-profits like ours?
|
Hi,

Thank you for reaching out. Below are the answers to your questions regarding our product.

1. **Progressive Model Updates:** Yes, LME is built on a **RAG (Retrieval-Augmented Generation)** architecture using open-source technologies on AWS. Documentation updates can be managed through the client dashboard, where you can manually add new content. Additionally, we periodically scan existing URLs to ensure your knowledge base remains up to date.
2. **Base Model:** We use **OpenAI's 4o-mini** as the foundational model for LME.
3. **Pricing for Non-Profits:** We are an AI lab and love nonprofits.
   - The **whitepaper-based chatbot and general Q&A features** are available for free.
   - If your documentation dataset expands significantly or user traffic increases, you have the option to **continue using the free bot on the whitepaper** or transition to an affordable paid plan that primarily covers our operational costs.
Please let us know if you have any further questions or if you'd like a
demo.
Thanks,
Owais
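As context for readers unfamiliar with the architecture mentioned in point 1, the retrieval step of a RAG pipeline can be sketched roughly as below. This is a minimal illustration only, using keyword overlap in place of real vector embeddings; the function names (`retrieveChunks`, `buildPrompt`) and the sample corpus are illustrative assumptions, not LME's actual API.

```javascript
// Toy document corpus standing in for indexed documentation chunks.
const docs = [
  "Holochain is an agent-centric framework for distributed applications.",
  "The whitepaper describes source chains and DHT validation.",
  "Widget scripts can be embedded via a script tag with an org id."
];

// Score a document by how many distinct query words it contains.
// Real systems use embedding similarity instead of keyword overlap.
function score(query, doc) {
  const q = new Set(query.toLowerCase().split(/\W+/).filter(Boolean));
  return doc.toLowerCase().split(/\W+/).filter(w => q.has(w)).length;
}

// Return the k highest-scoring chunks for a query.
function retrieveChunks(query, corpus, k = 2) {
  return [...corpus]
    .sort((a, b) => score(query, b) - score(query, a))
    .slice(0, k);
}

// Assemble the retrieved context and the question into a single prompt
// that would then be sent to the base model (e.g. 4o-mini).
function buildPrompt(query, corpus) {
  const context = retrieveChunks(query, corpus).join("\n");
  return `Answer using only this context:\n${context}\n\nQuestion: ${query}`;
}
```

The key property this sketches is that the base model itself never needs retraining: updating the docs only means re-indexing the corpus, which matches the "periodic URL scan" behavior described above.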
|
Hi there,

Following up on the previous email, I hope the responses provided addressed your questions adequately. Please let me know if you have any further questions or require additional information. I'd be happy to schedule a call/demo at your convenience.

Best Regards,
Owais
|
Thanks! Every question answered except one: what are your pricing plans? I don't see that on your website. I'd also like to know: how long have you been in business? Thanks!
This PR integrates the LetMeExplain.ai chatbot into the Holochain developer docs to provide AI-powered responses to user queries. The chatbot is loaded via an external script and connects to a Large Language Model (LLM) to generate responses. The LLM is currently trained only on the Holochain Whitepaper.
Technical Details:
- The chatbot widget script (`lme_chatbot_widget.js`) is hosted on https://dashboard.letmeexplain.ai.
- The script uses an `orgId` to identify the client.

Security and Privacy:

Support:
For access to the LetMeExplain.ai client dashboard or technical support, please contact:
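For reviewers, an embed along the lines described above would typically look something like the sketch below. The script URL and the `orgId` mechanism come from this PR's description; the `data-org-id` attribute name and the function names are assumptions for illustration, not confirmed details of the LetMeExplain.ai widget.

```javascript
// Describe the attributes a loader would set on the injected <script> tag.
// (data-org-id is a hypothetical attribute name for passing the client id.)
function buildWidgetTag(orgId) {
  return {
    src: "https://dashboard.letmeexplain.ai/lme_chatbot_widget.js",
    async: true,
    "data-org-id": orgId
  };
}

// Inject the widget script into the page. No-op outside a browser,
// so this returns null when `document` is unavailable (e.g. Node.js).
function injectWidget(orgId) {
  if (typeof document === "undefined") return null;
  const attrs = buildWidgetTag(orgId);
  const s = document.createElement("script");
  s.src = attrs.src;
  s.async = attrs.async;
  s.setAttribute("data-org-id", attrs["data-org-id"]);
  document.head.appendChild(s);
  return s;
}

// Usage in a page: injectWidget("YOUR_ORG_ID");
// (placeholder — the real orgId would come from the client dashboard)
```

Loading the widget asynchronously keeps it from blocking page rendering, which matters for a docs site where the chatbot is an optional enhancement.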