Replies: 4 comments
-
Thank you for bringing this up. I don't see how this poses a greater risk than blindly copying commands from articles, documentation, or Stack Overflow. The vulnerability lies with the user's failure to review the command before executing it, not with the tool that suggests it. Similar tools already exist in the IDE world, such as GitHub Copilot and other completion or assistance tools. These tools essentially do the same thing: they convert natural language into code (which could in theory wipe an OS, though I've never heard of that actually happening). And of course, I would never run it on business-critical infrastructure; the main use case for me personally is my development machine. P.S. 2FA is activated 🙂
-
This is a concern, but I agree it's ultimately the user's responsibility. I think shell_gpt can address it in a few ways.
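One way that comes up elsewhere in this thread is a confirmation gate: print the generated command and refuse to run it unless the user explicitly approves it. A minimal sketch of that pattern, assuming a plain subprocess call (the function name and flow are illustrative, not shell_gpt's actual implementation):

```python
import subprocess

def confirm_and_run(command: str) -> None:
    """Print the model-generated command and run it only after an explicit 'y'."""
    print(f"Suggested command:\n  {command}")
    answer = input("Execute? [y/N] ").strip().lower()
    if answer == "y":
        # shell=True is exactly the risky part: the confirmation above is the
        # last line of defense before the command hits the system.
        subprocess.run(command, shell=True)
    else:
        print("Aborted.")

if __name__ == "__main__":
    # A harmless example standing in for model output.
    confirm_and_run("ls -la /tmp")
```

This doesn't make the tool safe by itself, but it preserves the human review step that the rest of the thread treats as the real safeguard.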
-
Another question is how to build a feedback mechanism to let users (and perhaps OpenAI) know that a particular answer from GPT is incorrect ("flag as incorrect"). Quite likely that mechanism would itself be more vulnerable to attack than GPT is.
-
I am not very concerned. So far it hasn't appeared to attempt anything hostile or untoward. Maybe a security checker could automatically run over any code it generates? That would hopefully pick up on known exploit vectors. I'm working on giving it full control over my system so I don't have to administer software installs or other debugging tasks myself (#108).
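A checker like that could start as nothing more than a deny-list scan over the generated command before it is shown or executed. The patterns below are a rough, non-exhaustive sketch of the idea, not something shell_gpt ships:

```python
import re

# A handful of well-known destructive patterns. A real checker would need far
# more than this, and would still miss plenty of what a model can produce.
DANGEROUS_PATTERNS = [
    r"\brm\s+-rf\s+/(\s|$)",                 # recursive delete from the filesystem root
    r"\bmkfs(\.\w+)?\b",                     # reformatting a device
    r"\bdd\s+if=.*\s+of=/dev/sd\w",          # raw writes to a disk
    r"curl\s+[^|]*\|\s*(sudo\s+)?(ba)?sh",   # piping a download straight into a shell
]

def looks_dangerous(command: str) -> bool:
    """Return True if the command matches a known-bad pattern."""
    return any(re.search(p, command) for p in DANGEROUS_PATTERNS)

if __name__ == "__main__":
    for cmd in ("ls -la", "curl http://example.com/install.sh | sudo bash"):
        verdict = "BLOCK" if looks_dangerous(cmd) else "ok"
        print(f"{verdict:5} {cmd}")
```

Pattern matching will never catch novel exploit vectors, so it would complement the Y/N confirmation rather than replace it.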
-
Not to be an alarmist, but projects like this seem like they could create a new kind of attack vector.
While it seems like this project isn't going to cause ChatGPT to go rogue and start hacking servers because it got caught in some cyber-dystopian hallucination and decided to exploit what it knows about Apache server vulnerabilities, future projects might.
I am curious whether others have thought about this problem and how we could safeguard against it, beyond just printing out what ChatGPT outputted and asking Y/N?
Also, may I request that the maintainers make sure they have 2FA turned on for their PyPI accounts? :)
Cheers... Ian