
feat: no longer support role arguments #1095

Merged
merged 1 commit into main from feat
Jan 18, 2025
Conversation

sigoden
Owner

@sigoden sigoden commented Jan 18, 2025

Why

Because role arguments are not valuable.

The prompt needs to be carefully designed, and any change may affect the whole system. The placeholder-replacement mechanism of role arguments is not practical.

@sigoden sigoden merged commit 42eedf9 into main Jan 18, 2025
3 checks passed
@sigoden sigoden deleted the feat branch January 18, 2025 01:28
@milanglacier

milanglacier commented Jan 18, 2025

I usually use a fixed role. I haven't found any case where the system prompt needs to be dynamically generated.

I think most of the time you can just tell the LLM about all the possible situations it may encounter, and that should be sufficient, instead of using variables to create a dynamic system prompt.

@einarpersson

einarpersson commented Jan 19, 2025

I have never used role args, so it's totally fine for me that they are gone.

But:

I usually use a fixed role. I haven't found any case where the system prompt needs to be dynamically generated.

I think most of the time you can just tell the LLM about all the possible situations it may encounter, and that should be sufficient, instead of using variables to create a dynamic system prompt.

Am I the only one who really loves truly dynamic system prompts? I have made it so that my role markdown files support env-variable interpolation (with envsubst) and have an associated (optional) <role-name>.sh script, which is sourced at each new message.

This enables my roles to:

  • Always have reliable and up-to-date awareness of folder contents
  • Always have up-to-date time and date
  • Have custom context injected automatically if a .aichat/context/**.md-file is present in cwd
  • ....

The sky is really the limit here, and these are of course different per role.

These things can of course be returned from function calls instead, but my experience over the past months is that my approach reduces the risk of the LLM jumping to conclusions (e.g. about a file path). Often the hard part (for LLMs or humans) is knowing what information is even relevant and not being blind to erroneous assumptions. It also makes the whole experience smoother, and it reduces the number of tokens used, since not all tool definitions have to be exposed to the LLM.

3 participants