
Beam #4996

Merged: 25 commits into langchain-ai:master on May 24, 2023

Conversation

@NolanTrem (Contributor) commented May 19, 2023

Beam

Adds a wrapper around the Beam API to deploy, and make subsequent calls to, an instance of the gpt2 LLM in a cloud deployment. Requires installation of the Beam library and registration of a Beam client ID and client secret. Once deployed, additional calls can be made through the instance of the large language model in your code or by calling the Beam API directly.

Example notebook: https://colab.research.google.com/drive/13nigL8hQ6gCw9qzP1QEhkaWuSZ5cKOS0?usp=sharing
Twitter: @beam_cloud
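
For illustration, a minimal usage sketch assembled from the snippets later in this thread. The parameter names and the environment-variable credentials are assumptions about the wrapper's surface, not a verbatim excerpt; see the example notebook for the authoritative version:

```python
from langchain.llms.beam import Beam

# Credentials are assumed to come from BEAM_CLIENT_ID / BEAM_CLIENT_SECRET
# environment variables (or equivalent constructor fields).
llm = Beam(
    model_name="gpt2",
    name="langchain-llm",          # name of the Beam app
    cpu=8,                         # remote machine specs
    memory="32Gi",
    gpu="A10G",
    python_version="python3.8",
    python_packages=["transformers", "torch"],
    max_length="50",
)

llm._deploy()                      # deploy gpt2 to Beam's cloud
output = llm._call("Running machine learning on a remote backend")
print(output)
```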

langchain/llms/beam.py: two resolved review threads (outdated)
@dev2049 dev2049 added the 03 enhancement Enhancement of existing functionality label May 19, 2023
@NolanTrem NolanTrem marked this pull request as ready for review May 20, 2023 04:49
@hwchase17 (Contributor) left a comment:

It would be best to have an example notebook in docs/models/llms/integrations...

You can also have the integration docs here, but there should be an example there as well (that's where all the others are).

@NolanTrem NolanTrem requested a review from hwchase17 May 21, 2023 19:27
@NolanTrem (Contributor, Author):

@hwchase17 Notebook added, with examples in the integration docs as well as in the notebook :)

@dev2049 (Contributor) left a comment:

A couple of comments.

Resolved review threads (outdated) on docs/modules/models/llms/integrations/beam.ipynb and langchain/llms/beam.py
@NolanTrem NolanTrem changed the title Beam integration and unit test Beam May 22, 2023
```python
# excerpt from langchain/llms/beam.py: parameters of the _call signature
    prompt: str,
    stop: Optional[list] = None,
    run_manager: Optional[CallbackManagerForLLMRun] = None,
    **kwargs: Any,
```
Contributor:

We should adhere to the signature on the abstract class, so no **kwargs. Could we pass app_id in at instantiation instead?

@NolanTrem (Contributor, Author):

When I tried doing this, it threw a linting error because it overrode the LLM superclass signature. Happy to pass it at instantiation, if preferred.
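
For reference, a minimal sketch of the suggested shape: _call matching the abstract class signature with no **kwargs, and app_id moved to a field set at instantiation. Names follow this thread; the merged implementation may differ:

```python
from typing import List, Optional

from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM


class Beam(LLM):
    app_id: Optional[str] = None  # set once, at instantiation, not per call

    @property
    def _llm_type(self) -> str:
        return "beam"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
    ) -> str:
        # would call the deployed Beam app identified by self.app_id
        raise NotImplementedError
```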

```python
# excerpt from the usage example under discussion
)
llm._deploy()

output = llm._call(prompt="Your prompt goes here")
```
Contributor:

Nit: this should be passed in as a positional arg, not a kwarg.
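
That is, the example call would become (placeholder prompt text):

```python
output = llm._call("Your prompt goes here")
```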

```python
# excerpt from langchain/llms/beam.py: inside the _deploy method
self.app_creation()
self.run_creation()

process = subprocess.run(
```
Contributor:

Is there any way to avoid writing scripts and kicking off subprocesses? For example, could we make the script in run_creation a standalone function and add to the documentation that users should run it before instantiating the LLM?

@NolanTrem (Contributor, Author):

I'm not sure that this is the best way to approach it: a standalone static script wouldn't have the remote machine specs passed in during instantiation of the LLM, and would need to be updated anyway. Since the _deploy method isn't a required call (you can pass already-deployed app_ids to the _call method), would leaving this as-is be alright?
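
A sketch of the reuse path being described, written with app_id at instantiation per the earlier suggestion (at this point in the thread it was still passed to _call); the app_id value is a placeholder:

```python
# Reuse an already-deployed Beam app: skip _deploy() entirely.
llm = Beam(model_name="gpt2", app_id="your-existing-app-id")  # other params omitted
output = llm._call("Your prompt goes here")
```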

Contributor:

Aren't most of the params used in script creation only on the LLM for script creation (i.e., they're not used in the actual calls, right)? If so, you could have those params as args on the standalone function and remove them from the LLM.

Happy to defer to you on what's best, though.

@NolanTrem (Contributor, Author):

Structurally, where would you want the standalone app_creation and run_creation functions to live? I can see how that might be preferred; I just want to make sure we structure it intuitively.

Contributor:

I'm OK keeping them in this llms/beam.py file, or adding a new utilities/beam.py file for them.
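
A rough sketch of what a standalone helper in utilities/beam.py could look like, with the machine specs as explicit arguments instead of LLM fields. The function name, parameters, and the `beam deploy` CLI invocation are assumptions for illustration:

```python
# utilities/beam.py (hypothetical): standalone deployment helper so the LLM
# wrapper doesn't need to carry script-creation params or spawn subprocesses.
import subprocess


def deploy_beam_app(
    name: str,
    cpu: int,
    memory: str,
    gpu: str,
    python_version: str,
    python_packages: list,
) -> str:
    """Generate the Beam app definition, deploy it, and return the app_id."""
    # ... write the app script from the given machine specs (omitted) ...
    process = subprocess.run(
        ["beam", "deploy", "app.py"],  # CLI command is an assumption
        capture_output=True,
        text=True,
        check=True,
    )
    # Parsing the app_id out of the CLI output; the format is an assumption.
    return process.stdout.strip()
```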

@dev2049 dev2049 added the lgtm PR looks good. Use to confirm that a PR is ready for merging. label May 24, 2023
@dev2049 dev2049 merged commit faa2665 into langchain-ai:master May 24, 2023
@danielchalef danielchalef mentioned this pull request Jun 5, 2023
This was referenced Jun 25, 2023