Add tests + AddGraphProgram tool + various fixes
YoanSallami committed Sep 25, 2024
1 parent 8b0ffff commit 759b2d4
Showing 18 changed files with 403 additions and 102 deletions.
115 changes: 75 additions & 40 deletions README.md

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion docs/Core API/Graph Program.md
@@ -172,7 +172,7 @@ CREATE
(answer)-[:NEXT]->(end)
"""

main = gp.GraphProgram().from_cypher(cypher)
main = gp.GraphProgram(name="main").from_cypher(cypher)

```

48 changes: 46 additions & 2 deletions docs/FAQ.md
@@ -2,10 +2,54 @@

## Frequently Asked Questions

### Why HybridAGI?

We are dissatisfied with the current trajectory of agent-based systems that lack control and efficiency. Today's approach involves building React/MKRL agents that operate independently without human control, often leading to infinite loops of nonsense due to their tendency to stay within their data distribution. Multi-agent systems attempt to address this issue, but they often result in more nonsense and prohibitive costs due to the agents' chitchat. Additionally, today's agents often require fine-tuning to enhance or correct their behavior, which can be a time-consuming and complex process.

With HybridAGI, the only thing you need to do is modify the behavior graph (the graph programs). We believe that fine-tuning should be a last resort, used only when in-context learning fails to yield the desired results. By grounding cognitive science in computer science concepts, we empower programmers to build the agent system of their dreams by controlling the sequence of actions and decisions. Our goal is to build an agent system that can solve real-world problems using an intermediate language that is interpretable by both humans and machines. If we want to keep humans in the loop in the coming years, we need to design agent systems for that purpose.

### What is the difference between LangGraph and HybridAGI?

TODO
LangGraph is built on top of LangChain, which was also the case for HybridAGI last year. However, given the LangChain team's push toward ReACT agents that lack control and explainability, we switched to DSPy, which provides better value by focusing on pipeline optimization. LangGraph has since emerged to compensate for the poor decision-making of LangChain, but by then we had already proven the value of our approach. Moreover, LangGraph, like many agentic frameworks, describes a static finite state machine. In our view, an AGI system must not only be Turing complete, which many agentic frameworks are, but also capable of programming itself on the fly (meaning real continuous learning) to truly begin the AGI journey, and that capability is lacking in other frameworks.

### What is the difference between Llama-Index and HybridAGI?

TODO
Llama-Index recently released an event-driven agent system. Like LangGraph, it is a static state machine, and the same remarks apply to their work.

### What is the difference between DSPy and HybridAGI?

HybridAGI is built on top of the excellent work of the DSPy team, and it is intended as an abstraction to simplify the creation of complex DSPy programs in the context of LLM Agents. DSPy is more general and is also used for simpler tasks that don't need agentic systems. Unlike DSPy, our programs are not static but dynamic and can adapt to the user query by dynamically calling programs stored in memory. Moreover, we focus our work on explainable neuro-symbolic AGI systems using Graphs. The graph programs are easier to build than implementing them from scratch using DSPy. If DSPy is the PyTorch of LLM applications, think of HybridAGI as the Keras or HuggingFace of neuro-symbolic LLM agents.

### What is the difference between OpenAI o1 and HybridAGI?

OpenAI o1 and HybridAGI share many common goals, but they are built with different paradigms in mind. Like OpenAI o1, HybridAGI uses multi-step inference and is a goal-oriented agent system. However, unlike OpenAI o1, we guide the CoT trace of our agent system instead of letting it freely explore its action space, a paradigm closer to A*, where the agent navigates a predefined graph, than to Q-learning. This results in more efficient reasoning, as domain experts can program the agent to solve a particular use case. We can use smaller LLMs, reducing the environmental impact and increasing the ROI. The downside of our technology is that you need expert knowledge in your domain, as well as in programming and AI systems, to best exploit its capabilities. For that reason, we provide audit, consulting, and development services to people and companies that lack the technical skills in AI to implement their systems.

### Who are we?

We're not based in Silicon Valley or part of a big company; we're a small, dedicated team from the south of France. Our focus is on delivering an AI product where the user maintains control. We're dissatisfied with the current trajectory of agent-based products. We are experts in human-robot interactions and building interactive systems that behave as expected. While we draw inspiration from cognitive sciences and symbolic AI, we aim to keep our concepts grounded in computer science for a wider audience.

Our mission extends beyond AI safety and performance; it's about shaping the world we want to live in. Even if programming becomes obsolete in 5 or 10 years, replaced by some magical prompt, we believe that traditional prompts are insufficient for preserving jobs. They're too simplistic and fail to accurately convey intentions.

In contrast, programming each reasoning step demands expert knowledge in prompt engineering and programming. Surprisingly, it's enjoyable and not that difficult for programmers, as it allows you to gain insight into how AI truly operates by controlling it. Natural language combined with algorithms opens up endless possibilities. We can't envision a world without it.

### How do we make money?

We provide audit, consulting, and development services for businesses that want to implement neuro-symbolic AI solutions in various domains, from computer vision to high-level reasoning with knowledge graph/ontology systems in critical sectors like healthcare, biology, finance, aerospace, and many more.

HybridAGI is a research project to showcase our capabilities but also to bring our vision of safe AGI systems for the future. We are a bootstrapped start-up that seeks real-world use cases instead of making pretentious claims to please VCs and fuel the hype.

Because our vision of LLM capabilities is more moderate than others', we are actively looking to combine different fields of AI (evolutionary, symbolic, and deep learning) to make this leap into the future without burning the planet by relying on scaling alone. Besides the obvious environmental benefits, relying on small/medium models gives us a better understanding of our systems and the capability to do useful research without trillion-dollar datacenters.

HybridAGI is our way to be prepared for that future and, at the same time, to showcase our understanding of modern and traditional AI systems. HybridAGI is proof that you don't need billions of dollars to work on AGI systems, and that a small team of passionate people can make a difference.

### Why did we release our work under GNU GPL?

We released HybridAGI under the GNU GPL for several reasons, the first being that we want to protect our work and the work of our contributors. The second is that we want to build a future for people to live in without being dependent on Big AI tech companies: we want to empower people, not enslave them by destroying the market and leaving them jobless without a way to retain ownership of their knowledge. HybridAGI is a community project, by the community, for the community. Finally, HybridAGI is a way to connect with talented and like-minded people all around the world and to create a community around a desirable future.

### Is HybridAGI just a toolbox?

Some could argue that HybridAGI is just a toolbox. However, unlike LangChain or Llama-Index, HybridAGI has been designed from the ground up to work in synergy with a special-purpose LLM trained on our DSL/architecture. We have enhanced our software thanks to the community, and because we created our own programming language, we are also the best positioned to program it. Over the last year of the project, we have accumulated data, learned many augmentation techniques, and cleaned our datasets to keep our competitive advantage. We might release the LLM we are building when we decide that it is beneficial for us to do so.

### Can I use HybridAGI commercially?

Our software is released under the GNU GPL license to protect ourselves and the contributions of the community. Because the logic of your application (the graph programs) is kept separate, there is no IP problem with using HybridAGI. Moreover, in production you will likely want to build a FastAPI server to query your agent and to separate the backend and frontend of your app (like a website), so the GPL license does not contaminate the other pieces of your software. We also offer dual licensing for our clients if needed.
3 changes: 0 additions & 3 deletions docs/Modules API/Agents/Graph Program Interpreter.md

This file was deleted.

37 changes: 37 additions & 0 deletions docs/Modules API/Agents/Graph interpreter Agent.md
@@ -0,0 +1,37 @@
# Graph Interpreter Agent

The `GraphInterpreterAgent` is the agent system that executes the Cypher programs stored in memory. It can branch over the graph programs by asking itself questions when encountering Decision steps, use tools when encountering Action steps, and jump to other programs when encountering Program steps.

## Usage

```python
from hybridagi.modules.agents import GraphInterpreterAgent
from hybridagi.core.datatypes import AgentState, Query
from hybridagi.modules.agents.tools import PredictTool, SpeakTool

agent_state = AgentState()

tools = [
PredictTool(),
SpeakTool(
agent_state = agent_state,
)
]

agent = GraphInterpreterAgent(
agent_state = agent_state, # The agent state
program_memory = program_memory, # The program memory where the graph programs are stored
embeddings = None, # The embeddings to use when storing the agent steps (optional, default to None)
trace_memory = None, # The trace memory to store the agent steps (optional, default to None)
tools = tools, # The list of tools to use for the agent
    entrypoint = "main", # The entrypoint for the graph programs (default to "main")
    num_history = 5, # The number of last steps to remember in the agent context (default to 5)
    commit_decision_steps = False, # Whether or not to include the decision steps in the agent context (default to False)
    decision_lm = None, # The decision language model to use if different from the one configured (optional, default to None)
    verbose = True, # Whether or not to display the colorful trace when executing the program (default to True)
    debug = False, # Whether or not to raise exceptions during the execution of a program (default to False)
)

result = agent(Query(text="What is the capital of France?"))

```
25 changes: 25 additions & 0 deletions docs/Modules API/Agents/Tools/Ask User.md
@@ -0,0 +1,25 @@


The `AskUser` tool is useful for asking the user for additional information during execution.

## Output

```python
```

## Usage

```python

ask_user = AskUserTool(
    name = "AskUser", # The name of the tool
    agent_state = agent_state, # The state of the agent
    simulated = True, # Whether or not to simulate the user using an LLM
    func = None, # Callable function to integrate with a front-end (optional)
    lm = None, # The language model to use if different from the one configured (optional, default to None)
)
```

### Integrate it with Gradio

TODO
2 changes: 1 addition & 1 deletion docs/index.md
@@ -4,7 +4,7 @@ Welcome to HybridAGI documentation, you will find the resources to understand a

### LLM Agent as Graph VS LLM Agent as Graph Interpreter

What makes our approach different from Agent as Graph is the fact that our Agent system is *not a static finite state machine*, but an interpreter that can read/write and execute node by node a *dynamic graph data (the graph programs) structure separated from that process*. Making possible for the Agent to learn by executing, reading and modifying the graph programs (like any other data), in its essence HybridAGI is intended to be a self-programming system centered around the Cypher language.
What makes our approach different from Agent as Graph (like LangGraph or Llama-Index) is that our Agent system is *not a static finite state machine*, but an interpreter that can read/write and execute, node by node, a *dynamic graph data structure* (the graph programs) separated from that process. This makes it possible for the Agent to learn by executing, reading and modifying the graph programs (like any other data); in its essence, HybridAGI is intended to be a self-programming system centered around the Cypher language.
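
The distinction above can be illustrated with a toy sketch (this is not the HybridAGI implementation, and all names are illustrative): the program is plain graph data that a small interpreter walks node by node, so the running system could in principle rewrite that data like any other, which a hard-coded finite state machine cannot do.

```python
# Toy illustration of "agent as graph interpreter": the program is data,
# keyed by node id, and the interpreter walks it node by node.
program = {
    "start":  {"type": "control", "next": "answer"},
    "answer": {"type": "action", "tool": "Speak",
               "purpose": "Answer the question", "next": "end"},
    "end":    {"type": "control", "next": None},
}

def interpret(program, entrypoint="start"):
    """Walk the graph from the entrypoint, collecting an execution trace."""
    trace = []
    node_id = entrypoint
    while node_id is not None:
        node = program[node_id]
        trace.append(node_id)
        if node["type"] == "action":
            # A real interpreter would call the named tool here.
            pass
        node_id = node["next"]
    return trace

print(interpret(program))  # ['start', 'answer', 'end']
```

Because `program` is ordinary data separated from `interpret`, mutating it between (or even during) runs changes the agent's behavior without touching the interpreter itself.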

## Install

24 changes: 12 additions & 12 deletions hybridagi/core/graph_program.py
@@ -42,15 +42,15 @@ class Action(BaseModel):
tool: str = Field(description="The tool name")
purpose: str = Field(description="The action purpose")
prompt: Optional[str] = Field(description="The prompt used to infer to tool inputs")
inputs: Optional[List[str]] = Field(description="The input for the prompt", default=[])
output: Optional[str] = Field(description="The variable to store the action output", default=None)
var_in: Optional[List[str]] = Field(description="The list of input variables for the prompt", default=[])
var_out: Optional[str] = Field(description="The variable to store the action output", default=None)
disable_inference: bool = Field(description="Weither or not to disable the inference", default=False)

class Decision(BaseModel):
id: str = Field(description="Unique identifier for the step")
purpose: str = Field(description="The decision purpose")
question: str = Field(description="The question to assess")
inputs: Optional[List[str]] = Field(description="The input prompt variables", default=[])
var_in: Optional[List[str]] = Field(description="The list of input variables for the prompt", default=[])

class Program(BaseModel):
id: str = Field(description="Unique identifier for the step")
@@ -296,8 +296,8 @@ def from_cypher(self, cypher_query: str) -> Optional["GraphProgram"]:
purpose=step_props["purpose"],
tool=step_props["tool"],
prompt=step_props["prompt"],
inputs=step_props["inputs"] if "inputs" in step_props else [],
output=step_props["output"] if "output" in step_props else None,
var_in=step_props["var_in"] if "var_in" in step_props else [],
var_out=step_props["var_out"] if "var_out" in step_props else None,
disable_inference=True if "disable_inference" in step_props else False,
))
elif step_type == "Decision":
@@ -311,7 +311,7 @@ def from_cypher(self, cypher_query: str) -> Optional["GraphProgram"]:
id=step_props["id"],
purpose=step_props["purpose"],
question=step_props["question"],
inputs=step_props["inputs"] if "inputs" in step_props else [],
var_in=step_props["var_in"] if "var_in" in step_props else [],
))
elif step_type == "Program":
if "id" not in step_props:
@@ -364,10 +364,10 @@ def to_cypher(self):
}
if step.prompt:
args["prompt"] = step.prompt
if step.inputs and len(step.inputs) > 0:
args["inputs"] = step.inputs
if step.output:
args["output"] = step.output
if step.var_in and len(step.var_in) > 0:
args["var_in"] = step.var_in
if step.var_out:
args["var_out"] = step.var_out
if step.disable_inference is True:
args["disable_inference"] = True
cleaned_args = re.sub(key_quotes_regex, sub_regex, json.dumps(args, indent=2))
@@ -378,8 +378,8 @@ def to_cypher(self):
"purpose": step.purpose,
"question": step.question,
}
if len(step.inputs) > 0:
args["inputs"] = step.inputs
if len(step.var_in) > 0:
args["var_in"] = step.var_in
cleaned_args = re.sub(key_quotes_regex, sub_regex, json.dumps(args, indent=2))
cypher += f"\n({step_id}:Decision "+cleaned_args+"),"
elif isinstance(step, Program):
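
To make the rename in this diff concrete, here is a hypothetical Action node written with the new property names that `to_cypher` emits and `from_cypher` parses (the id, purpose, prompt, and variable names are illustrative, not taken from the repository):

```cypher
(answer:Action {
  id: "answer",
  purpose: "Answer the objective's question",
  tool: "Speak",
  prompt: "Please answer the given question",
  var_in: ["objective"],
  var_out: "final_answer"
})
```

Programs written with the old `inputs`/`output` property names would need to be updated to `var_in`/`var_out` to remain parseable after this change.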
1 change: 1 addition & 0 deletions hybridagi/loaders/dataset_loader.py
@@ -0,0 +1 @@
# TODO
1 change: 1 addition & 0 deletions hybridagi/loaders/graph_program_loader.py
@@ -0,0 +1 @@
# TODO
