Commit 5dc07fb

feat: the core of the framework in place, with a simple example in the README.md (#1)

sradc authored Oct 4, 2023
1 parent 564a49e commit 5dc07fb
Showing 17 changed files with 1,216 additions and 60 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -1,3 +1,4 @@
openai.key
environment
README.tmp.ipynb

9 changes: 7 additions & 2 deletions Makefile
@@ -9,9 +9,14 @@ test:
# Runs the nb, to generate output / figures
execute_readme:
poetry run python -m nbconvert --to notebook --execute README.ipynb --output README.tmp.ipynb
# Note turned off execute_readme dependency for now, when running `readme`,
# because the `input(..)` stuff prevents it from working
# remember to use the tempfile when restore, e.g:
# readme
# poetry run python -m nbconvert --to markdown --output README.md README.tmp.ipynb \
readme: execute_readme
poetry run python -m nbconvert --to markdown --output README.md README.tmp.ipynb \
readme:
poetry run python -m nbconvert --to markdown --output README.md README.ipynb \
&& poetry run python scripts/replace_readme_image_links.py

# run semantic release, publish to github + pypi
179 changes: 176 additions & 3 deletions README.ipynb

Large diffs are not rendered by default.

149 changes: 147 additions & 2 deletions README.md
@@ -1,10 +1,155 @@
<!-- Warning, README.md is autogenerated from README.ipynb, do not edit it directly -->

`pip install make_agents`

[![](https://github.com/sradc/make_agents/workflows/Python%20package/badge.svg?branch=main)](https://github.com/sradc/make_agents/commits/)

<p align="center">
<img src="https://raw.githubusercontent.com/sradc/MakeAgents/master/README_files/make_agents_logo.jpg" width=256>
</p>

# MakeAgents

WIP
MakeAgents is a micro framework for creating LLM-powered agents.
It consists of tools and a paradigm for creating agents.

## Quickstart examples

### Example 1: A simple conversational agent


```python
import json
import pprint

import make_agents as ma

from pydantic import BaseModel, Field
```


```python
# Define the functions the agent will use


class MessageUserArg(BaseModel):
question: str = Field(description="Question to ask user")


@ma.llm_func
def message_user(arg: MessageUserArg):
"""Send the user a message, and get their response."""
response = ""
while response == "":
response = input(arg.question).strip()
return response


class LogNameArg(BaseModel):
first_name: str = Field(description="User's first name")
last_name: str = Field(description="User's last name")


@ma.llm_func
def log_name(arg: LogNameArg):
"""Log the name of the user. Only do this if you are certain."""
return {"first_name": arg.first_name, "last_name": arg.last_name}


# Define the agent, as a graph of functions
agent_graph = {
ma.Start: [message_user],
message_user: [message_user, log_name],
}
display(ma.draw_graph(agent_graph))

# Initialise the message stack with a system prompt
messages_init = [
{
"role": "system",
"content": "Get the first and last name of the user.",
}
]

# Run the agent
for messages in ma.run_agent(agent_graph, messages_init):
pprint.pprint(messages[-1], indent=2)
print()
print(f"Retrieved user_name: {json.loads(messages[-1]['content'])}")
```



![png](https://raw.githubusercontent.com/sradc/MakeAgents/master/README_files/README_3_0.png)



{ 'content': None,
'function_call': { 'arguments': '{"next_function": "message_user"}',
'name': 'select_next_func'},
'role': 'assistant'}

{ 'content': '{"next_function": "message_user"}',
'name': 'select_next_func',
'role': 'function'}

{ 'content': None,
'function_call': { 'arguments': '{"question": "What is your first name?"}',
'name': 'message_user'},
'role': 'assistant'}

{ 'content': '"Uh, well, it\'s Bill"',
'name': 'message_user',
'role': 'function'}

{ 'content': None,
'function_call': { 'arguments': '{"next_function": "message_user"}',
'name': 'select_next_func'},
'role': 'assistant'}

{ 'content': '{"next_function": "message_user"}',
'name': 'select_next_func',
'role': 'function'}

{ 'content': None,
'function_call': { 'arguments': '{"question": "And what is your last name?"}',
'name': 'message_user'},
'role': 'assistant'}

{ 'content': '"And that... would be BoBaggins"',
'name': 'message_user',
'role': 'function'}

{ 'content': None,
'function_call': { 'arguments': '{"next_function": "log_name"}',
'name': 'select_next_func'},
'role': 'assistant'}

{ 'content': '{"next_function": "log_name"}',
'name': 'select_next_func',
'role': 'function'}

{ 'content': None,
'function_call': { 'arguments': '{\n'
'"first_name": "Bill",\n'
'"last_name": "BoBaggins"\n'
'}',
'name': 'log_name'},
'role': 'assistant'}

{ 'content': '{"first_name": "Bill", "last_name": "BoBaggins"}',
'name': 'log_name',
'role': 'function'}

Retrieved user_name: {'first_name': 'Bill', 'last_name': 'BoBaggins'}
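The transcript above shows the pattern: at each step the model first selects the next function from the current node's successors in `agent_graph`, then calls it. The internals of `run_agent` are not shown in this commit, but the control flow can be pictured with a minimal sketch; `Start`, `pick_next`, and `run_graph` below are illustrative stand-ins (a scripted chooser replaces the LLM), not MakeAgents internals:

```python
# Illustrative sketch of a graph-driven agent loop.
# `Start`, `pick_next`, and `run_graph` are hypothetical stand-ins,
# NOT the real MakeAgents implementation.
Start = "start"


def pick_next(options, scripted_choices):
    # In the real framework an LLM picks among `options`;
    # here we just consume a pre-scripted list of choices.
    return scripted_choices.pop(0)


def run_graph(graph, scripted_choices):
    current = Start
    visited = []
    # Keep stepping while the current node has successors to choose from.
    while graph.get(current):
        current = pick_next(graph[current], scripted_choices)
        visited.append(current.__name__)
    return visited


def message_user():
    return "hi"


def log_name():
    return {"first_name": "Bill"}


graph = {Start: [message_user], message_user: [message_user, log_name]}
print(run_graph(graph, [message_user, log_name]))  # ['message_user', 'log_name']
```

The loop terminates naturally when it reaches a node with no successors (here `log_name`), which matches the transcript ending once `log_name` returns.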


### Notes:

Prompting has a big impact on the performance of the agent. The `llm_func` function names, Pydantic models and docstrings can all be considered part of the prompt.
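To see why names and docstrings matter, it helps to remember that function-calling LLM APIs receive each function as a JSON spec built from exactly those pieces. The helper below is a stand-alone illustration of that mapping (a simplified `function_spec` of our own, not a MakeAgents API; real frameworks derive richer schemas from the Pydantic models):

```python
import inspect

def function_spec(func) -> dict:
    """Build a simplified OpenAI-style function spec from a Python function.

    The function name, its docstring, and its parameter names all end up
    in the text the model conditions on, so wording them carefully is
    effectively prompt engineering.
    """
    params = inspect.signature(func).parameters
    return {
        "name": func.__name__,
        "description": inspect.getdoc(func) or "",
        "parameters": {
            "type": "object",
            # Simplification: treat every parameter as a string.
            "properties": {name: {"type": "string"} for name in params},
            "required": list(params),
        },
    }

def log_name(first_name: str, last_name: str):
    """Log the name of the user. Only do this if you are certain."""
    return {"first_name": first_name, "last_name": last_name}

spec = function_spec(log_name)
print(spec["name"])         # log_name
print(spec["description"])  # Log the name of the user. Only do this if you are certain.
```

Renaming `log_name` or rewording its docstring changes this spec, and therefore changes the model's behaviour.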


### Setup
### Dev setup

- Clone the repo and `cd` into it
- Run `poetry install`
Binary file added README_files/README_3_0.jpg
Binary file added README_files/README_3_0.png
Binary file added README_files/README_6_0.jpg
Binary file added README_files/README_6_0.png
Binary file added README_files/make_agents_logo.jpg
12 changes: 12 additions & 0 deletions TODO
@@ -0,0 +1,12 @@
- more examples (with some api call stuff, e.g. to search engines, etc.)
- gifs for the examples
- ability to check number of tokens with tiktoken
- summariser / reducer (e.g. run before making llm call)
- document ways to stop / wait for confirmation before running functions (e.g. when powerful)
- more unit test coverage
- squash / rebase branch before merging into main, then archive
- docs
- functions with no args
- tutorials

- [x] ability to configure completion, e.g. model etc.
3 changes: 3 additions & 0 deletions make_agents/__init__.py
@@ -1,3 +1,6 @@
from importlib_metadata import version

__version__ = version(__package__)

# Expose the main api:
from make_agents.make_agents import Start, draw_graph, llm_func, run_agent # noqa: F401
21 changes: 21 additions & 0 deletions make_agents/gpt.py
@@ -0,0 +1,21 @@
import openai
from tenacity import (
retry,
retry_if_exception_type,
stop_after_attempt,
wait_random_exponential,
)


def get_completion(model: str = "gpt-3.5-turbo", **kwargs) -> callable:
@retry(
retry=retry_if_exception_type(
(openai.error.Timeout, openai.error.RateLimitError)
),
wait=wait_random_exponential(min=0, max=60),
stop=stop_after_attempt(6),
)
def completion(**kwargs2):
return openai.ChatCompletion.create(model=model, **kwargs, **kwargs2)

return completion
