
AI Workspace agent freezes #14285

Open
dannaf opened this issue Oct 10, 2024 · 14 comments
@dannaf

dannaf commented Oct 10, 2024

Bug Description:

The Workspace AI agent gets stuck 'generating' without ever displaying a response, apparently only when the query concerns the workspace.

As the screenshot below shows, my AI API key etc. is set up correctly and the AI agent is connected in the chat, given that the test message went through and a response was quickly provided. But when I asked even a simple question about the workspace, it got stuck and never displayed any result. (I eventually gave up and pressed the stop button, also to make sure it was not billing me for an infinite loop whose results it was not even displaying, just in case.)

Steps to Reproduce:

  1. Enable the experimental AI features in 1.54.0 and connect an OpenAI API key in the settings.
  2. Disable all the agents except the Workspace agent in the AI configuration (the code-completion and terminal agents may still have been enabled, but the others, such as command and universal, were definitely disabled).
  3. Type a simple test message that does not ask anything about the workspace ("Test" as shown below, or "can you hear me", etc.). Observe the response from the AI agent to confirm that the API key is connected.
  4. Then query something about the actual workspace and observe the AI agent get stuck, supposedly 'generating'.

image

Additional Information

The Theia log showed the following message:

2024-10-10T11:35:53.610Z root ERROR [hosted-plugin: 263] There is no document for untitled:/Untitled-1aichatviewlanguage
  • Operating System: Oracle Linux 2022 (hosting theia/theia-blueprint; accessed from Windows 11 via the browser)
  • Theia Version: 1.54.0 (via Firefox 131.0)
@JonasHelming
Contributor

@dannaf : Thank you for the report. Was this a rather big workspace by any chance?

@dannaf
Author

dannaf commented Oct 11, 2024

Not at all

@JonasHelming
Contributor

Just to make sure: can you reproduce this every time?

@JonasHelming
Contributor

I just tried the exact same query, and it works for me. However, the question "How many lines of code" will probably force it to read every file, which might take a while. If it is doing this, you should already see this:

image

@sdirix sdirix added the TheiaAI label Oct 16, 2024
@dannaf
Author

dannaf commented Oct 28, 2024

We've made some progress. I now saw this:

image

(This was with OpenAI gpt-4o-mini on the eclipse-theia/theia repo opened as a workspace, with the Workspace Agent as the only enabled agent, on the question "how many files are in this repo?"; I didn't ask about lines of code.)

Not sure what exactly I did differently, as this was a completely different build/serving of Theia; you can probably close this issue. But note that I still saw the spinning circle for quite a long time, and I filed a separate bug report #14361 on this, because there seem to be a lot of issues going on, even if it is not frozen per se as I complained of in this issue.

image

@JonasHelming
Contributor

I have seen issues when you launch two Theia instances that both communicate with an LLM. I'm trying to reproduce this in a reliable way, but this might be an issue.
About the time of the request: this of course also depends on the speed of the underlying LLM and the processing there. You can observe the request in the Output view.
Feel free to close this!

@navr32

navr32 commented Dec 10, 2024

Hi all. I am testing Theia version 1.56.2
VS Code API version: 1.95.3
running on the Manjaro Arch Linux distribution with the latest AppImage you have generated,

with the experimental AI features enabled. I want to use my local Ollama server.
I have set one model per feature: orchestrator, universal, command, code completion, terminal assistant, workspace.
The orchestrator works, but slowly.
The chat works, but I am more interested in @workspace. When I try @workspace, I never get any return of the workspace content. I have tried many models with tool support, but never with a good return, or with nothing returned at all.

Here is an example of what this gives with the llama3-groq-tool-use model; with all the other models I have tested, the results are bad. There are no errors; it is always the same.

You
@Workspace what file do you see ?

Workspace
I'm sorry, but I don't have the capability to directly view files. However, if you need help with a specific file or directory within your workspace, please provide me with the path, and I can assist you further.

You
@Workspace   #relativeFile components.rs what is doing this file ?

Workspace
I'm sorry, but I can't directly access files outside of your workspace or without a specific function call like getFileContent. Could you please provide me with the content of "components.rs" so I can help you better?

You
@Workspace    #relativeFile ./components.rs what is doing this file ?

Workspace
You
@Workspace    #relativeFile ./components.rs what is doing this file ?

Workspace

You
@Workspace /home/test/test_theia/src what file do you see ?

Workspace

If I try to just chat with @workspace, it works.

@Workspace are you here ?

Workspace
Yes, I'm here to assist with programming-related questions. For other topics, please refer to a specialized source.

You
@Workspace    #file /home/test/src/components.rs

Workspace
I'm sorry, but I can't directly access files outside of your workspace or without a specific function call like getFileContent. Could you please provide me with the content of "components.rs" so I can help you better?

@JonasHelming
Contributor

I checked the Ollama provider; it currently does not handle tools (see here).

You might work around this by using Ollama via its OpenAI-compatible API. You can configure custom OpenAI endpoints in the settings; for Ollama's OpenAI compatibility, see: https://ollama.com/blog/openai-compatibility

@dhuebner Are you interested in fixing tool support for Ollama? We should raise this as a separate ticket; it has nothing to do with the workspace provider.

@JonasHelming
Contributor

@dannaf Have you checked the new workspace agent functions? They should work much better now.

@dhuebner
Member

@JonasHelming
Yes, I can take a look this Friday. Would you mind opening an issue for this? Thanks!

@navr32

navr32 commented Dec 12, 2024

Ok, thanks for the reply. So I have tested! I had a little success with one model out of all the models I tested.
Sometimes I have to restart Theia for the settings to apply correctly, because my attempts caused some odd behavior in the model choice pickers and loaded the wrong models. After a fresh restart everything is ok; I can pick the right one and test again.

So, for example, I added this to settings.json:

"ai-features.openAiCustom.customOpenAiModels": [
    {
        "model": "granite3-dense:8b-instruct-q4_K_M",
        "url": "http://192.168.33.129:11434/v1",
        "id": "granite3-dense:8b-instruct-q4_K_M", // Optional: if not provided, the model name will be used as the ID
        "apiKey": "ollama" // Optional: use 'true' to apply the global OpenAI API key
    },
    {
        "model": "llama3-groq-tool-use:8b-q4_K_M",
        "url": "http://192.168.33.129:11434/v1",
        "id": "llama3-groq-tool-use:8b-q4_K_M", // Optional: if not provided, the model name will be used as the ID
        "apiKey": "ollama" // Optional: use 'true' to apply the global OpenAI API key
    }
],

After setting the right model, I select it for @workspace in the model list and test.

I tested with all of these models:
"gemma2:27b-instruct-fp16",
"snowflake-arctic-embed2:568m-l-fp16",
"codestral:22b-v0.1-q4_K_M",
"qwen2.5-coder:32b-base-q2_K",
"nomic-embed-text:latest",
"starcoder2:15b-instruct-v0.1-q4_K_M",
"granite3-dense:8b-instruct-q4_K_M",
"llama3-groq-tool-use:8b-q4_K_M",
"hhao/qwen2.5-coder-tools:7b-q4_K_M"

I had "success" only with llama3-groq-tool-use:8b-q4_K_M; the other models often say no tools are available, or return nothing, or return something unreliable.
I say "little success" because getting just the list of all the files in the workspace is good, yes, but very limited: if I try to ask more about the project (what each file's purpose is, or other things), this often fails. Perhaps I need to add some model settings to improve the models' context for this to become much more useful. By default, Ollama limits the context to 2048 tokens, so I must build a custom model with Ollama and the right context settings, or find out how to send the num_ctx setting through the Ollama API from the Theia configs.
For the OpenAI API, I don't know whether num_ctx is available?
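For reference, baking a larger context window into a derived Ollama model can be done with a Modelfile; a minimal sketch (the 8192 value and the derived model name `llama3-groq-tools-8k` are arbitrary choices, not from this thread):

```
# Modelfile: derive a model with a larger context window
FROM llama3-groq-tool-use:8b-q4_K_M
PARAMETER num_ctx 8192
```

Then `ollama create llama3-groq-tools-8k -f Modelfile` builds the derived model, and it can be referenced by that name in a customOpenAiModels entry like the one above.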

You
@Workspace list me all file and what do the function in the code !

Workspace
Ran
I've listed all the files in your workspace. Now, let's analyze their purposes:

components.rs: Could contain reusable UI components or utility functions.
main.rs: Typically serves as the entry point for executing application logic.
prompt.rs: Might handle user interaction, prompts, or input handling.
reporting.rs: Could be related to generating reports from your application's data.
sensors.rs: Possibly handles sensor data collection or processing.
stressors.rs: May relate to applying stress tests or simulations.
test_easy_stress_workspace.theia-workspace: This is a project configuration file for Theia IDE, defining the workspace settings.
Would you like more detailed analysis on any specific file?

You
@Workspace ok, but I want you to open the files and tell me what this project does.

Workspace
To give such an overview, I need to access and read the content of each file carefully. This process is time-consuming due to the number of files listed. Could you please specify one or more files that are crucial for understanding the project's purpose?

@JonasHelming
Contributor

Are you aware that you can define custom request settings? https://theia-ide.org/docs/user_ai/#custom-request-settings
Besides this, be aware that Theia provides the capability to talk to any LLM. We cannot, of course, test all LLMs regarding their ability to work well. This is why we allow you to modify the prompts.
Of course we are very interested in your experiences, and if you find a working solution, please share it!
I suggest closing this bug and moving further discussions:

  • into new bug reports if something is not working in Theia AI
  • into a discussion if this is about the experience with different LLMs
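For reference, the custom request settings described in the linked documentation look roughly like this in settings.json (a sketch only; the exact key names and supported options vary by Theia version and provider, so check the docs for your release, and note the modelId here is just the example model from this thread):

```
"ai-features.modelSettings.requestSettings": [
    {
        "modelId": "llama3-groq-tool-use:8b-q4_K_M",
        "requestSettings": {
            "num_ctx": 8192
        }
    }
]
```

Whether a given option (such as num_ctx) is honored depends on the provider the model is routed through.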

@navr32

navr32 commented Dec 14, 2024

Ok, I can test more. However, to ensure I'm testing effectively, could you provide a reference or example test case that includes:

  • A sample workspace to load into Theia.
  • An @workspace question to push and the expected return values if the LLM is working correctly.

Having these specifics would greatly help me in understanding how to properly set up my tests and verify the functionality of the @workspace feature.

@JonasHelming
Copy link
Contributor

You can use the Theia website repo as an example: https://github.com/eclipse-theia/theia-website
The following should work:
image
