Exploits and malware policy updates #397
This statement should not be modified from the original.
Agreed. What does "active attack" even mean? Some viruses from the '90s are still attacking the few vulnerable devices that remain (like old industrial machines). Can that be considered an "active attack"?
If your goal is to clarify, removing specifics and replacing them with vague handwaving doesn't do it. Using GitHub as a command-and-control system is a very specific example where it should be clear when someone has violated the rule. But "support of ongoing and active attacks" is a vague catchall that's impossible to determine whether somebody has violated. Hackers have already automated downloading my code in their attacks, meaning that I'm technically violating the new rules. I wouldn't get canceled/censored, but the decision about whom you do choose to cancel/censor would become an arbitrary and prejudicial one, not one based upon facts.
How will you be the arbiters of what is, and is not, causing harm? What will the threshold be? My local coffee shop (with online ordering) might see a simple exploit kit as something that could ruin their business completely, while my bank is likely to have an appropriate D&D strategy to defend against this.
When considering an ongoing attack, is there some sort of time limitation, or is the expectation that GitHub will maintain its own threat intelligence and make decisions on this basis ad infinitum?
MSFT's intentions are quite clear here: first they deleted the proof of concept for Exchange, and now this policy.
Anything like Responder, CrackMapExec, Empire, etc. can be banned tomorrow, because it's inconvenient for MSFT to have tools on GitHub that exploit 35-year-old vulnerabilities they don't want to patch. When they bought GitHub, it was clear to me that it was not to promote open source (they always complained about it and argued against it) but to control what kind of code is convenient, and which kind of code should be online and which should not.
In any case, it won't have any effect on malware, viruses, spyware, ransomware, or state-sponsored malware, as they are all closed source ;)
BTW, as the owner of the resource, M$ ₲H has the right to block any user (even one who has never violated the rules) and any repo at its own discretion, without disclosing the actual criteria of discrimination. That includes criteria like being a person whose political position and/or activity (i.e. being a proponent of free software, being a Stallman sympathizer, developing free software competing with M$'s, or just not using Windows) is potentially harmful to the business in the long term; being not profitable enough (a "free-rider"); or being of a specific nationality/religion/geographical location/political orientation/sexual orientation/Hinduism caste/prison caste/occupation/age/employer/luck level (at random). So de facto the proposed terms are already active; they just were not codified.
This is a good change only in that the original text said:
This old text completely forbade the publication of malware and exploits via GitHub. On one hand this is a bad policy, as "malware and exploits" is a broad term that covers professional penetration testing tools such as Metasploit Framework. On the other hand, the old text was clearly not being enforced or policed. In light of the recent ProxyLogon incident and this policy change being put forth, we can expect that any policy (either the old one or a new one) will be more actively enforced.
The change to the text does not go far enough in fixing this issue, as raised by others. In my opinion, at the very least the proposed policy must be updated to provide clarity on:
For what it's worth, I am in support of a policy that forbids the use of GitHub as a direct malware delivery mechanism (i.e. a CDN) or as a malware command and control mechanism. In my opinion, GitHub should generally permit collaboration on and publication of exploit code and malware, and the original text as it relates to this activity should simply be struck from the policy entirely.
The "old" text disallowed hosting files on GH that would infect users when a web page on GH is opened or when a repo is cloned. The proposed text forbids any code that can potentially be used in attacks. I.e., if some malware infects a machine, then downloads the source code of, e.g., hashcat, builds it, and then uses it to brute-force hashes on infected servers to advance the attack, then hashcat is forbidden too.
The full text was:
I disagree with your reading of the old text. Take for example Metasploit Framework. IMO it "contains" "active malware [and] exploits".
Granted I'm not a lawyer. If you aren't either, all we have is our own understandings of the text.
It's worth considering some cases here. If we take GH at its word that it's trying to protect the good pentesting and research tools, then the language should be clear enough to allow both of these:
Tools written by pen testing and infosec research firms can be and are abused.
Is there any provision for tools that are not inherently malicious but are being used for malicious purposes?
I think this clause is exceedingly problematic when one considers that many attackers conduct "living off the land" attacks. This clause could be read such that any application -- including those that are part of the core OS install -- would be at risk of removal.
I don't understand the objections of anybody above me in this thread.
I can see the argument that it should be okay to use GitHub to host even malware that's actively being used, and that as such this clause was and still is problematic. But what I'm not seeing is why you object to this change, per se.
Before, the terms banned hosting "active malware". Now they ban hosting "malware ... [that is] in support of ongoing and active attacks that are causing harm". Certainly there are plenty of vague terms in there, but no matter how we parse them, isn't the new wording strictly more liberal than the previous wording? What's an example of malware that the objectors would characterise as "in support of ongoing and active attacks" and yet would not characterise as "active malware"?
My understanding is that "active malware" meant things like a browser exploit in the HTML hosted on GitHub Pages. This change makes the policy apply to code that can be used anywhere, not just from GH's infrastructure.
"in support of ongoing and active attacks that are causing harm" is so vague that it is impossible to enforce fairly. I agree with @robertdavidgraham, replacing a specific rule (no C2) with a vague one does not clarify the policies at all.
The majority of Windows attacks abuse PowerShell in some way. Is https://github.com/PowerShell/PowerShell now in breach of this policy because it supports active attacks?
A vast amount of security tools are dual-use and can be abused by attackers while also being valuable to defenders. Without clear-cut guidelines there is a significant risk that defenders will be heavily penalised by arbitrary decisions with limited impact to attackers.
This is also a gray area. What is considered personal information - is a single mailbox or name considered personal information?
https://github.com/gelstudios/gitfiti
Will this block commit history modification and git commit art?
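For context, commit art of the gitfiti sort works by backdating commits so the contribution graph renders "pixels"; the repository contents themselves are unremarkable. A minimal sketch of the mechanism (repo name, identity, and dates are all invented for illustration):

```shell
# Each cell in GitHub's contribution graph is just a day's commit count,
# so creating empty commits with chosen GIT_AUTHOR_DATE/GIT_COMMITTER_DATE
# values "paints" that day's cell.
git init -q graffiti && cd graffiti
git config user.email "artist@example.com"   # placeholder identity
git config user.name  "Graffiti Artist"
for day in 2021-03-01 2021-03-02 2021-03-03; do
  GIT_AUTHOR_DATE="${day}T12:00:00" GIT_COMMITTER_DATE="${day}T12:00:00" \
    git commit -q --allow-empty -m "pixel ${day}"
done
git log --format='%ad' --date=short   # lists the three backdated days
```

Nothing here is inauthentic in a technical sense; the question above is whether the policy's "inauthentic activity" wording sweeps it in anyway.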
It certainly would! That's automated and inauthentic activity. Edit: It seems that GitHub only selectively enforces this. You probably won't get in trouble unless you routinely abuse it.
This is actually not a new statement. If you notice, GitHub moved it up from bullet point 9 to bullet point 4, but the same provision exists.
IMO, this should remain. GitHub has the right to defend its systems from abuse.
It was never abuse. "Security researchers do not 'support ongoing and active attacks that are causing harm'; they instead raise awareness of those risks and help the concerned parties defend against them."
In the event GitHub removes content (for the reasons above, or any reason for that matter), what is the expectation for the end user? Is it just... gone forever? Is it possible to recover the data and move it somewhere else? Do we get a warning or anything? A deprecation period?
Would this include the use of PowerShell download cradles? If so, this clause would expose many PowerShell tools to removal... including those which are not 'malicious'.
In my understanding, @besimorhino, such PowerShell download cradles would only be in scope under the proposed language if they were specifically hosted in support of unlawful attack or malware campaigns. If they have a dual use (e.g. network penetration testing tooling, or security/malware research) established prior to any abuse reports associated with such an unlawful attack, they would not be affected.
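For readers unfamiliar with the term: a "download cradle" is any one-liner that fetches code and executes it in a single step, without installing a tool to disk first. The PowerShell form is typically something like `IEX (New-Object Net.WebClient).DownloadString($url)`; the shell equivalent is `curl -s $url | sh`. A hedged sketch of the pattern, using a local file in place of the network fetch so it runs offline (the path is invented for illustration):

```shell
# Simulate the "download" step with a local file; in a real cradle the
# script body would arrive over the network instead.
printf 'echo hello-from-cradle\n' > /tmp/cradle-demo.sh
# Execute the fetched text directly; nothing persists as an installed tool.
cat /tmp/cradle-demo.sh | sh
```

The dual-use point follows directly: this exact pattern both installs legitimate software (many projects document `curl | sh` installers) and delivers payloads, which is why "download cradle" alone cannot be the test for maliciousness.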
A large percentage of open source security tooling hosted on GitHub is designed to help with security audits, pen testing, red teaming, etc. It is used for research into network and operational vulnerabilities, rather than software vulnerabilities. It would be good to ensure this category is included in this list of "beneficial to the security community".
I agree with Lee. Many repositories host/archive malicious phishing kits/sites, binaries, and code (e.g. Metasploit) that are used for educational purposes and, unfortunately, also for malicious intent.
This and the following line imply that a repository must use Markdown. Not all users wish to use Markdown for their repositories, and any suitable file format should be acceptable.
This clause would likely be better received by the security community if the same standards were applied to all code. There are plenty of sysadmin scripts that can completely ruin your systems... is there a similar restriction placed on them? The placement of the clause confuses me. Does it apply to everyone, or just to code that could be problematic?
Agreed - what the suggested change does is push responsibility onto the committer whilst providing no clear definition of "potentially harmful". `rm` is potentially harmful, but presumably wouldn't need a declaration. There are Nessus plugins that are potentially harmful if run against a production box, but they're not intended to do harm. Would they need to put a disclaimer in?
As publishing exploits can be a legal grey area, in certain situations someone may wish to publish anonymously (e.g. whistleblowing, or forcing a fix through public disclosure). Requiring a contact opens an avenue for legal action against researchers.
This will simply result in an army of strawmen appearing as "security contacts" and nothing more. GitHub has all the means to get in touch with the user who published the content, why introduce this measure? Who exactly does this serve the most and which problem does this solve? Please explain how this possibly personal information will be used and why the currently existing means are not sufficient.
This change essentially mandates that people publish PII somewhere it can trivially be scraped.
That seems like a really, really poor policy change. As @dev-zzo says, GH already has all that's needed to make contact (as well as the ability to take stuff down).
This feels like an example of really, really bad practice - this isn't an organisation exposing services (i.e. the use case for `.well-known/security.txt`).
A `SECURITY.md` security policy seems positioned to identify how a project wishes to receive responsible disclosures. What would an example `SECURITY.md` look like for a project documenting a proof of concept?
More so, what would a `SECURITY.md` look like for projects like Metasploit that are actually used for malicious intent?
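One possible answer, purely as a hypothetical sketch: for a proof-of-concept repo, a `SECURITY.md` might do little more than name the issue the PoC demonstrates and a contact route. Every identifier below is invented for illustration:

```markdown
# Security Policy

This repository contains a proof of concept for CVE-XXXX-YYYY (placeholder
identifier). It is published for research and defensive purposes only.

## Reporting a problem with this PoC

If you believe this code is being abused, or want it taken down, open a
private report via GitHub's "Report abuse" link or contact the maintainer
through the profile listed on this repository.

## Scope

No support or warranty is provided. Do not run this against systems you
do not own or lack permission to test.
```

Whether a stub like this would actually satisfy the proposed policy is exactly the open question in this thread.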
Once a 0day is published, have takedowns been shown to be effective in mitigating spread? My assumption would be that malicious actors already have access to it via personal networks at that point. The takedown likely just draws more attention to it. I recommend leaving things up.
Not only that, the takedown also hinders detection and response teams. Look, the malware is going to exist and is going to be deployed. If we decide to remove it from the public eye, we're also choosing to remove our capability to respond quickly.
Depends on how operationalized/weaponizable the exploit is, and how easily it can fit into the attacker's toolkits and operational tempo.
This is a large part of what informed the current language in this policy. Takedowns tend to just push exploit/0day code into less-public and less-accessible spaces, where defenders and researchers are less likely to see it (and, thus, less likely to be able to create patches or fixes).
Ease of access by researchers becomes more, not less, important when a vulnerability is being actively exploited.
I think this sentence might go against that spirit:

> However, GitHub may restrict content if we determine that it still poses a risk where we receive active abuse reports and maintainers are working toward resolution.

Companies will want 0days, or anything being actively exploited (read: hurting their reputation), taken down and will push hard for this.
In my experience, by the time an exploit hits public repos, it has either been depleted of most of its hack value and/or already spread over "the forums", and whoever wanted a copy has got one or can get one within direct reach. Removing content does not really help curb any ongoing attacks, as they are, well, already ongoing, but it does hinder or prevent future analysis and other research work. If GitHub has any statistical data that would back the contrary, please share it.
The “…and maintainers are working toward resolution” guideline is vague. Which maintainers’ involvement satisfies this prong of the guidelines? The maintainers of a GitHub-hosted project providing the proof of concept? The maintainers of a closed-source project that is the target of the PoC?