Simple chat app for beginners #919
Happy to see this. A few thoughts, though: I feel a big contributor to the issue is the design of ConversableAgent. The current implementation hardcodes console I/O rather than using an abstraction. Re the simple web app sample:
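To illustrate the design point above, here is a minimal, self-contained sketch of what such an I/O abstraction could look like. The names (`ChatIO`, `ConsoleIO`, `QueueIO`, `EchoAgent`) are hypothetical and are not AutoGen's actual API; the toy agent stands in for ConversableAgent to show how injecting the I/O channel decouples agent logic from the console.

```python
from typing import Protocol


class ChatIO(Protocol):
    """Hypothetical I/O interface: everything the agent reads or writes goes through this."""

    def output(self, message: str) -> None: ...
    def input(self, prompt: str) -> str: ...


class ConsoleIO:
    """Console implementation, equivalent to what is currently hardcoded."""

    def output(self, message: str) -> None:
        print(message)

    def input(self, prompt: str) -> str:
        return input(prompt)


class QueueIO:
    """A web UI could implement the same interface with queues or websockets."""

    def __init__(self) -> None:
        self.outbox: list[str] = []
        self.inbox: list[str] = []

    def output(self, message: str) -> None:
        self.outbox.append(message)

    def input(self, prompt: str) -> str:
        return self.inbox.pop(0)


class EchoAgent:
    """Toy stand-in for ConversableAgent: the I/O channel is injected, not assumed."""

    def __init__(self, io: ChatIO) -> None:
        self.io = io

    def run_turn(self) -> None:
        user_msg = self.io.input("you> ")
        self.io.output(f"agent> you said: {user_msg}")
```

With this shape, the same `EchoAgent` can be driven by `ConsoleIO` in a terminal or by `QueueIO` behind a Streamlit/Reflex front end, which is the separation the comment above is asking for.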
So that looks like a CLI app, basically what's in the quick start doc. I would like to see the same but with a web UI. Using a framework like Streamlit or Reflex would avoid the need for a separate web API layer.
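As a rough sketch of the Streamlit suggestion, the quick-start two-agent chat could be wrapped in a chat page with no separate API layer. This assumes `pip install streamlit pyautogen` (v0.2.*) and an `OPENAI_API_KEY` in the environment; the model name and page text are illustrative, not from the issue.

```python
# Hypothetical Streamlit front end over AutoGen's v0.2 quick-start agents.
# Run with: streamlit run app.py
import os

import streamlit as st
import autogen

st.title("AutoGen chat demo")

if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay the conversation so far on each rerun.
for msg in st.session_state.messages:
    with st.chat_message(msg["role"]):
        st.write(msg["content"])

if prompt := st.chat_input("Ask something"):
    st.session_state.messages.append({"role": "user", "content": prompt})

    llm_config = {
        "config_list": [
            {"model": "gpt-4", "api_key": os.environ["OPENAI_API_KEY"]}
        ]
    }
    assistant = autogen.AssistantAgent("assistant", llm_config=llm_config)
    # max_consecutive_auto_reply=0 ends the chat after the assistant's first reply.
    user_proxy = autogen.UserProxyAgent(
        "user_proxy",
        human_input_mode="NEVER",
        max_consecutive_auto_reply=0,
        code_execution_config=False,
    )
    user_proxy.initiate_chat(assistant, message=prompt)

    reply = user_proxy.last_message(assistant)["content"]
    st.session_state.messages.append({"role": "assistant", "content": reply})
    st.rerun()
```

Because Streamlit serves the page itself, the agent calls run in-process and there is no REST layer to maintain, which is the simplification suggested above.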
#740 is asking for the same thing.
If somebody wants to write a simple UI to demo basic autogen functionality, I'm sure we would add a link to it in the repo and docs.
FWIW, @victordibia's early version here https://github.com/victordibia/autogen-ui seems like the right level. Even though the same layers exist in it as in the latest version included in this repo, those layers are far simpler to follow. It needs a little updating to work with autogen v0.2.*.
There are more UI examples using Streamlit, Chainlit, etc., which haven't been added to https://microsoft.github.io/autogen/docs/Gallery. |
PR for autogen v0.2.*. It was pretty simple to fix. /cc @victordibia There's value in this particular example, as it's a web app version of the getting started example with the same prompt. I don't know your criteria for inclusion in this repo, but linking it in the getting started docs next to the CLI example would probably be useful. 🙂
Closing this issue due to inactivity. If you have further questions, please open a new issue or join the discussion in the AutoGen Discord server: https://discord.com/invite/Yb5gwGVkE5
As a first step, many new users struggle to run the NVDA-TESLA example or the notebooks, since they all involve various kinds of complexity. Our Discord users often ask for simpler sample code that just replicates a basic, ChatGPT-like conversational experience. Such an applet would also be convenient for anyone testing new model configuration settings or a new LLM.