THIS PROJECT IS CURRENTLY A PROOF OF CONCEPT. DON'T EXPECT EVERYTHING TO WORK! CHECK OUT THE TODO LIST FOR MORE INFO.
We want to

- Make it easy to use the Hugging Face Transformers Agent.

We provide

- the `TransformersAgentUI` component, which can be used in the notebook and as a web app
- a deployment of the web app on Hugging Face Spaces
You can install and use the package as simply as:

```bash
pip install transformers-agent-ui
```
In a notebook just run

```python
import panel as pn

from transformers_agent_ui import TransformersAgentUI

pn.extension("terminal", "notifications", notifications=True, design="bootstrap")

TransformersAgentUI()
```
To serve as a web app, create the file `app.py`

```python
import panel as pn

from transformers_agent_ui import TransformersAgentUI

if pn.state.served:
    pn.extension("terminal", "notifications", notifications=True, design="bootstrap")
    TransformersAgentUI().servable()
```
and run

```bash
BOKEH_RESOURCES=cdn panel serve app.py
```
To run the inference you will have to provide tokens, preferably via the environment variables

- `HUGGING_FACE_TOKEN`
- `OPEN_AI_TOKEN`

Alternatively, you can provide them on the *Settings* tab in the app.
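If you want to check up front that the tokens are available, here is a minimal sketch. Note that `missing_tokens` is a hypothetical helper for illustration, not part of the package:

```python
import os

# Hypothetical helper (not part of transformers-agent-ui): report which of
# the required token environment variables are unset or empty.
REQUIRED_TOKENS = ("HUGGING_FACE_TOKEN", "OPEN_AI_TOKEN")

def missing_tokens(environ=None):
    """Return the names of required tokens that are unset or empty."""
    environ = os.environ if environ is None else environ
    return [name for name in REQUIRED_TOKENS if not environ.get(name)]

# Example with only one token set (placeholder value):
print(missing_tokens({"HUGGING_FACE_TOKEN": "hf_example"}))  # → ['OPEN_AI_TOKEN']
```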
Install `transformers-agent-ui` including the *examples* dependencies:

```bash
pip install transformers-agent-ui[examples]
```
Explore the sample apps

```bash
pn hello transformers-agent-ui
```
You can now find the reference and gallery notebooks in the `examples/awesome-panel/transformers-agent-ui` folder. Check them out by running `jupyter lab`.
Please support Panel and awesome-panel by giving the projects a star on GitHub. Thanks!
If you are looking to contribute to this project you can find ideas in the issue tracker. To get started check out the DEVELOPER_GUIDE.
I would love to support and receive your contributions. Thanks.
## Todo

- Implement `TokenManager`
- Save every run - also when cache is hit
- Add notification if no token is available
- Deploy to PyPI. DONE - See link
- Rename the `running` parameter to `is_running`
- Move `use_cache` from the Settings tab to the Editor tab
- Save run prints to store
- Redirect log to Terminal AND to stdout for easier debugging
- Support dynamic arguments (text, image, etc.) to the run function
  - As inputs to `.run`
- Create/update from file
- Create/update from output
- Delete/remove
- Get better feedback on run exceptions
- Test it on lots of examples
- Add specific support for reading and writing more types. Currently most things are pickled.
  - `torch.Tensor` is often returned and can be saved
- Add three examples to make it easy to get started
- Support enabling/disabling the `remote` parameter setting. Currently we only support `remote`.
- Don't save the asset if it comes from the cache. Instead reuse it.
- Make the Cache/Store useful by providing an interface
- Multi-user support
  - Restrict logs to user session
    - See also hf #23354
  - Restrict store to user session
  - Make the application non-blocking when used by multiple users
- Deploy to Hugging Face. I probably won't be able to, as the app needs a lot of power when running locally.
- Build a sample store and collect data on runs
- Enable downloading the output value
- Help the app and/or Panel display the outputs. For example, when a tensor is returned, Panel does not know how to display it. And the agent fails if we ask it to return the value wrapped in a HoloViz Panel Audio pane. See Panel #4836.
- Specifically test that the output of each tool is supported
- Profile the `run` function and figure out if, for example, agents should be reused/cached
- The prompt seems to be cut off when displayed in the UI. Fix this.
- Make the project more specific by calling it hfagent-ui and providing HFAgentUI.
- Make the project more general as in agent-ui. We could provide HFAgentUI, LangChainAgentUI etc. Or maybe even just AgentUI. Much of the code would be reusable.
- Run inference async to make app more performant if possible
- Support chat mode.