@ravitemer wow, this is amazing! Thank you for sharing! One question I have: how are you able to stay under token limits? It seems like I cannot send more than around 6,000 tokens at once without getting a plenary error, and that is often used up by a single medium-sized file. How are you able to include the full context of all your open buffers? Thanks!

EDIT: Update to my comment above! I can now get a much larger context. The issue was a weird Windows quirk with the plenary / curl API. See here for details: #427 (comment)
---
I've been using CodeCompanion and implemented a way to include custom instructions like copilot-instructions.md and current buffer contexts in conversations with the LLM. This has been super helpful with the @full_stack_dev (or, as I call it, @takeover) agent for generating code relevant to our project's guidelines and structure. I'd like to share my approach and discuss potential improvements or official support for these features.
## Current Implementation
I've extended the adapter's `form_messages` handler to inject two types of system messages:

- Custom instructions read from `.github/copilot-instructions.md` in the project root
- The contents of the currently open buffers

### Custom Instructions
I'm reading project-specific instructions from `.github/copilot-instructions.md` in the project root or parent directories:
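Here's a minimal sketch of that lookup; the helper names (`find_instructions_file`, `read_custom_instructions`) are mine, and the upward walk is just one way to do it:

```lua
-- Walk upward from the cwd looking for .github/copilot-instructions.md.
local function find_instructions_file()
  local dir = vim.fn.getcwd()
  while dir do
    local candidate = dir .. "/.github/copilot-instructions.md"
    if vim.fn.filereadable(candidate) == 1 then
      return candidate
    end
    local parent = vim.fs.dirname(dir)
    if parent == dir then
      break -- reached the filesystem root
    end
    dir = parent
  end
end

-- Read the whole file as a single string, or nil if it doesn't exist.
local function read_custom_instructions()
  local path = find_instructions_file()
  if not path then
    return nil
  end
  return table.concat(vim.fn.readfile(path), "\n")
end
```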
### Buffer Context

I maintain an up-to-date list of active buffers and their content:
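A sketch of the collection step, assuming listed-and-loaded buffers are what counts as "active"; the returned table shape is illustrative:

```lua
-- Collect the name, filetype and full content of every listed, loaded buffer.
local function get_buffer_contexts()
  local contexts = {}
  for _, bufnr in ipairs(vim.api.nvim_list_bufs()) do
    if vim.api.nvim_buf_is_loaded(bufnr) and vim.bo[bufnr].buflisted then
      local name = vim.api.nvim_buf_get_name(bufnr)
      local lines = vim.api.nvim_buf_get_lines(bufnr, 0, -1, false)
      if name ~= "" and #lines > 0 then
        table.insert(contexts, {
          name = name,
          filetype = vim.bo[bufnr].filetype,
          content = table.concat(lines, "\n"),
        })
      end
    end
  end
  return contexts
end
```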
### Message Handling
I'm extending the Copilot adapter to include these contexts:
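Roughly like this, assuming CodeCompanion's `adapters.extend()` helper and the `(self, messages)` handler signature of the built-in OpenAI-compatible adapters; the return shape is an assumption to verify against your CodeCompanion version:

```lua
require("codecompanion").setup({
  adapters = {
    copilot = function()
      return require("codecompanion.adapters").extend("copilot", {
        handlers = {
          form_messages = function(self, messages)
            -- Build the extra system messages from the helpers above
            local header = {}
            local instructions = read_custom_instructions()
            if instructions then
              table.insert(header, {
                role = "system",
                content = "Project instructions:\n\n" .. instructions,
              })
            end
            for _, buf in ipairs(get_buffer_contexts()) do
              table.insert(header, {
                role = "system",
                content = ("Open buffer %s:\n\n%s"):format(buf.name, buf.content),
              })
            end
            -- Prepend them so they precede the chat history
            for i = #header, 1, -1 do
              table.insert(messages, 1, header[i])
            end
            return { messages = messages }
          end,
        },
      })
    end,
  },
})
```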
To avoid duplicates, I check for existing context messages and update them:
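One way to do that is to tag injected messages with a fixed marker and update them in place when one is already there; the marker is hypothetical, not a CodeCompanion convention:

```lua
-- Hypothetical marker used to recognise a previously injected message.
local CONTEXT_PREFIX = "[[cc-buffer-context]]"

-- Update the existing context message if present, otherwise insert it.
local function upsert_context(messages, content)
  local body = CONTEXT_PREFIX .. "\n" .. content
  for _, msg in ipairs(messages) do
    if msg.role == "system" and vim.startswith(msg.content or "", CONTEXT_PREFIX) then
      msg.content = body -- refresh stale buffer content in place
      return
    end
  end
  table.insert(messages, 1, { role = "system", content = body })
end
```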
## Questions
I'd appreciate any feedback on improving this implementation or guidance on a better approach that aligns with CodeCompanion's architecture.