Beam #4996
Conversation
would be best to have an example notebook in docs/models/llms/integrations...
can also have the integration docs here, but should have an example there as well (that's where all the others are)
@hwchase17 Notebook added with examples in the integration docs as well as in the notebook :)
couple comments
langchain/llms/beam.py (Outdated)
prompt: str,
stop: Optional[list] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
**kwargs: Any,
we should adhere to the signature on the abstract class, so no kwargs. Could we pass in app_id at instantiation instead?
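For illustration, the suggested shape would look roughly like the sketch below; the class and field names follow the diff above, and this is not the final code:

```python
from typing import Optional

from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM


class Beam(LLM):
    # app_id supplied once at instantiation rather than as a per-call kwarg
    app_id: Optional[str] = None

    @property
    def _llm_type(self) -> str:
        return "beam"

    def _call(
        self,
        prompt: str,
        stop: Optional[list] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
    ) -> str:
        # call the deployed Beam app using self.app_id
        ...
```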
When I tried doing this it was throwing a linting error because the override no longer matched the LLM superclass signature. Happy to pass it at instantiation, if preferred.
)
llm._deploy()

output = llm._call(prompt="Your prompt goes here")
nit: should be passed in as arg not kwarg
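i.e. the call above would become:

```python
output = llm._call("Your prompt goes here")
```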
self.app_creation()
self.run_creation()

process = subprocess.run(
is there any way to avoid writing scripts and kicking off subprocesses? for example, could we make the script in run_creation a standalone function and add to the documentation that users should run it before instantiating the LLM?
I'm not sure that this is the best way to approach it: a standalone static script wouldn't have the remote machine specs passed in during instantiation of the LLM and would need to be updated anyway. Since the _deploy method isn't a required call (you can pass already-deployed app_ids to the _call method), would leaving this as-is be alright?
aren't most of the params used in script creation only on the LLM for script creation (i.e. they're not used in the actual calls, right)? so you could have those params as args on the standalone function and remove them from the LLM?
happy to defer to you on what's best though
Structurally, where would you want the standalone app_creation and run_creation functions to be? I can see how that might be preferred, just want to make sure that we structure it intuitively.
I'm ok keeping them in this llms/beam.py file or adding a new utilities/beam.py file for them
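For illustration, a rough sketch of what standalone helpers in a utilities/beam.py module could look like if script creation were pulled off the LLM class. The parameter names, the generated script contents, and the beam deploy invocation are assumptions for the sake of the sketch, not the merged implementation:

```python
# utilities/beam.py -- hypothetical sketch, not the merged implementation
import subprocess
import textwrap


def run_creation(model_name: str = "gpt2", max_length: int = 50) -> None:
    """Write the run script that the deployed Beam app will execute."""
    script = textwrap.dedent(
        f"""\
        from transformers import GPT2LMHeadModel, GPT2Tokenizer

        def start(**inputs):
            tokenizer = GPT2Tokenizer.from_pretrained("{model_name}")
            model = GPT2LMHeadModel.from_pretrained("{model_name}")
            encoded = tokenizer.encode(inputs["prompt"], return_tensors="pt")
            output = model.generate(encoded, max_length={max_length})
            return {{"text": tokenizer.decode(output[0], skip_special_tokens=True)}}
        """
    )
    with open("run.py", "w") as f:
        f.write(script)


def deploy(app_file: str = "app.py") -> None:
    """One-time deploy step users would run before instantiating the LLM."""
    subprocess.run(["beam", "deploy", app_file], check=True)
```

With something like this, the machine-spec and model params live on the standalone functions, and the LLM itself only needs the app_id and credentials for the actual calls.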
Beam
Calls the Beam API wrapper to deploy and make subsequent calls to an instance of the gpt2 LLM in a cloud deployment. Requires installation of the Beam library and registration of Beam Client ID and Client Secret. Additional calls can then be made through the instance of the large language model in your code or by calling the Beam API.
Example notebook: https://colab.research.google.com/drive/13nigL8hQ6gCw9qzP1QEhkaWuSZ5cKOS0?usp=sharing
Twitter: @beam_cloud
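For reference, a rough usage sketch based on the snippets in this thread. The constructor parameters shown are illustrative assumptions (see the example notebook for the exact fields), and Beam credentials are assumed to be configured, e.g. through environment variables:

```python
from langchain.llms.beam import Beam

# Parameter names/values below are illustrative assumptions, not the final API.
llm = Beam(
    model_name="gpt2",
    name="langchain-gpt2",
    cpu=8,
    memory="32Gi",
    gpu="A10G",
    python_version="python3.8",
    python_packages=["transformers", "torch"],
    max_length="50",
)

# Deploy the app on Beam (optional if you already have a deployed app_id).
llm._deploy()

# Call the deployed model; the prompt is passed positionally per the review nit above.
output = llm._call("Running machine learning on a remote GPU")
print(output)
```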