Removing LangChain and Rebuilding Executor #1322

Merged · 13 commits · Sep 16, 2024
Conversation

@joaomdmoura (Collaborator) commented Sep 13, 2024

We have been overriding the Agent Executor for long enough; to support some of the new features we want to add, we need finer control. This is a big refactoring that removes LangChain as a dependency while still supporting its tools.

This refactor also adds or improves the following (a brief usage sketch follows the list):

  • A cap on the maximum number of requests per minute
  • A maximum number of iterations before giving a final answer
  • Properly takes advantage of system prompts
  • Gets all of our tests back to green
  • Adds the ability to skip the system prompt via use_system_prompt on the Agent
  • Adds the ability to not use stop words via use_stop_words on the Agent (to support o1 models)
  • Sliding context window gets renamed to respect_context_window and is now enabled by default
  • Delegation is now disabled by default (breaking change?)
  • Kept support for using the old LangChain models through a workaround, and there is a test for it
  • Inner prompts were slightly changed as well
  • We now properly use system prompts for agent role playing
  • There is a new token calculation flow; the old one was wrong (found a bug while doing this)
  • Improves overall reliability and quality of results
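
As a rough illustration only, here is a minimal sketch of how these flags might look on an Agent. use_system_prompt, use_stop_words, and respect_context_window are named in the list above; max_rpm, max_iter, and the rest of the constructor arguments are assumptions, not verified against the merged code.

    # Hedged sketch of the new Agent flags described above; the exact
    # constructor signature is an assumption, not taken from this PR's diff.
    from crewai import Agent

    researcher = Agent(
        role="Researcher",
        goal="Summarize recent papers on a topic",
        backstory="An analyst who digs through arXiv abstracts.",
        max_rpm=10,                   # cap on requests per minute (assumed name)
        max_iter=5,                   # iterations before forcing a final answer (assumed name)
        use_system_prompt=False,      # skip the system prompt entirely
        use_stop_words=False,         # e.g. for o1 models that reject stop words
        respect_context_window=True,  # sliding context window, now enabled by default
    )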

@thehapyone (Contributor)

Not to hijack the original intention here, but could this be an opportunity to add support for user-defined memory instances as well?

So, instead of the current implementation, which looks like this:

    # current implementation
    @model_validator(mode="after")
    def create_crew_memory(self) -> "Crew":
        """Set private attributes."""
        if self.memory:
            self._long_term_memory = LongTermMemory()
            self._short_term_memory = ShortTermMemory(
                crew=self, embedder_config=self.embedder
            )
            self._entity_memory = EntityMemory(crew=self, embedder_config=self.embedder)
        return self

    # proposed: allow custom user-supplied memory instances
    @model_validator(mode="after")
    def create_crew_memory(self) -> "Crew":
        """Set private attributes."""
        if self.memory:
            self._long_term_memory = self.long_term_memory or LongTermMemory()
            self._short_term_memory = self.short_term_memory or ShortTermMemory(
                crew=self, embedder_config=self.embedder
            )
            self._entity_memory = self.entity_memory or EntityMemory(
                crew=self, embedder_config=self.embedder
            )
        return self

One advantage of this is for users who want to use a different database than Chroma, for example, or a more powerful persistent RAG instance to power the memory.
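
As a sketch of how the proposed hook could look from the caller's side: the long_term_memory / short_term_memory keyword arguments, the custom classes, and the import paths below are hypothetical and part of this proposal, not an existing API.

    # Hypothetical usage of the proposed custom-memory hook; everything here is
    # an assumption based on the snippet above, not current crewAI behavior.
    from crewai import Crew
    from crewai.memory import LongTermMemory, ShortTermMemory

    class PostgresLongTermMemory(LongTermMemory):
        """Hypothetical long-term memory persisted in Postgres instead of the default store."""

    class RemoteRagShortTermMemory(ShortTermMemory):
        """Hypothetical short-term memory backed by an external RAG service."""

    crew = Crew(
        agents=[...],  # your agents here
        tasks=[...],   # your tasks here
        memory=True,
        long_term_memory=PostgresLongTermMemory(),
        short_term_memory=RemoteRagShortTermMemory(),
    )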

What do you think?

@joaomdmoura (Collaborator, Author)

I like the idea @thehapyone but will separate in another PR given how big this one got 😅

@joaomdmoura merged commit e77442c into main on Sep 16, 2024 · 4 checks passed
@thehapyone (Contributor)

> I like the idea @thehapyone but will separate in another PR given how big this one got 😅

I have a working setup and I can create a PR for that soon.

@hiddenkirby
> I like the idea @thehapyone but will separate in another PR given how big this one got 😅
>
> I have a working setup and I can create a PR for that soon.

Yes, please! It would also be great to allow pointing to a remote ChromaDB.

@thehapyone (Contributor)

> I like the idea @thehapyone but will separate in another PR given how big this one got 😅
>
> I have a working setup and I can create a PR for that soon.

Implemented here - #1339
