
[docs] revamp documentation #17

Merged Β· 99 commits Β· Nov 18, 2024

Commits
10084f5
misc updates
tscholak Oct 22, 2024
91cd526
revamp landing page
tscholak Oct 22, 2024
df5e09d
add about us section
tscholak Oct 22, 2024
585bb89
add developers corner
tscholak Oct 22, 2024
c6c2a29
Merge branch 'main' of github.com:ServiceNow/Fast-LLM into tscholak/i…
tscholak Oct 22, 2024
62a3b22
add docs README
tscholak Oct 22, 2024
257ba2d
improve landing page
tscholak Oct 22, 2024
a99d56d
improve landing page
tscholak Oct 22, 2024
dc8e1e6
improve landing page
tscholak Oct 22, 2024
4b47d51
add cost-efficiency comparison
tscholak Oct 22, 2024
fb7d3cc
refinements
tscholak Oct 22, 2024
91262bb
refinements
tscholak Oct 22, 2024
293d3b6
refinements
tscholak Oct 23, 2024
efde9b1
refinements
tscholak Oct 23, 2024
74b8ea7
linting
tscholak Oct 23, 2024
82adb54
rework cost efficiency comparison
tscholak Oct 23, 2024
f2beb4c
rework cost efficiency comparison
tscholak Oct 23, 2024
9a2397d
add devenv
tscholak Oct 23, 2024
e4230ea
add devenv
tscholak Oct 23, 2024
a1fa251
revamp structure
tscholak Oct 23, 2024
8903632
Merge remote-tracking branch 'origin/main' into tscholak/improve-docs
tscholak Oct 23, 2024
9f887c1
add quick-start guide
tscholak Oct 26, 2024
4f40ba9
Merge branch 'main' of github.com:ServiceNow/Fast-LLM into tscholak/i…
tscholak Oct 26, 2024
0ab5b62
add prepare-dataset script
tscholak Oct 27, 2024
79b8401
rewrite quick-start guide
tscholak Oct 27, 2024
8b6ef7b
rewrite quick-start guide
tscholak Oct 27, 2024
e04d5eb
add support for distributed data preparation
tscholak Oct 27, 2024
4f68378
rewrite quick-start guide
tscholak Oct 27, 2024
5cb0754
add help page
tscholak Oct 27, 2024
7b93dce
add help page
tscholak Oct 28, 2024
b79129c
add starcoder2 success story
tscholak Oct 28, 2024
4369066
add starcoder2 success story
tscholak Oct 28, 2024
2262c65
rewrite quick-start guide
tscholak Oct 28, 2024
60cc57a
add disclaimer
tscholak Oct 29, 2024
6a76020
add build instructions
tscholak Oct 29, 2024
13e29bc
Update README.md
hughesthe1st Oct 30, 2024
f72ac0e
Update index.md
hughesthe1st Oct 31, 2024
5a43ed5
Update index.md
hughesthe1st Oct 31, 2024
bff6506
Update index.md
hughesthe1st Oct 31, 2024
271d9d1
Update index.md
hughesthe1st Oct 31, 2024
7c9bc15
Merge remote-tracking branch 'origin/main' into tscholak/improve-docs
tscholak Oct 31, 2024
8216e58
Merge branch 'tscholak/improve-docs' of github.com:ServiceNow/Fast-LL…
tscholak Oct 31, 2024
c0b8959
remove unused blog
tscholak Oct 31, 2024
431eefa
add markdownlint
tscholak Oct 31, 2024
f11bbdf
add markdownlint
tscholak Oct 31, 2024
045dcca
Merge remote-tracking branch 'origin/main' into tscholak/improve-docs
tscholak Nov 3, 2024
43869e5
separate md linting for / and /docs
tscholak Nov 3, 2024
dd35c50
wip
tscholak Nov 5, 2024
f6e163f
wip
tscholak Nov 6, 2024
8acb4f3
wip
tscholak Nov 7, 2024
1ea3422
wip
tscholak Nov 7, 2024
f3eb0d6
wip
tscholak Nov 7, 2024
127cd43
wip
tscholak Nov 8, 2024
c382298
add datasets as dependency
tscholak Nov 8, 2024
8b49dfc
Merge remote-tracking branch 'origin/main' into tscholak/improve-docs
tscholak Nov 9, 2024
7304119
fix GPTMemmapDataset
tscholak Nov 9, 2024
47d453b
fix GPTMemmapDataset
tscholak Nov 9, 2024
bef3a72
add prepare-dataset command
tscholak Nov 10, 2024
0ffc75c
add prepare-dataset command
tscholak Nov 10, 2024
fda6386
add prepare-dataset command
tscholak Nov 10, 2024
acae7d9
add prepare-dataset command
tscholak Nov 10, 2024
eb7da59
add prepare-dataset command
tscholak Nov 10, 2024
b5ed2f0
add prepare-dataset command
tscholak Nov 10, 2024
c8f746a
only push latest tag for commits to main
tscholak Nov 10, 2024
0f80b76
add V100
tscholak Nov 10, 2024
e0f813c
use older generics syntax
tscholak Nov 10, 2024
b88c9d3
remove user and install Fast-LLM globally
tscholak Nov 10, 2024
4df12d9
simplify Dockerfile
tscholak Nov 11, 2024
3c5d4d9
wip
tscholak Nov 11, 2024
54af690
Merge remote-tracking branch 'origin/tscholak/prepare-dataset' into t…
tscholak Nov 11, 2024
3737bc0
improvements
tscholak Nov 11, 2024
4b6b195
add docstring
tscholak Nov 11, 2024
52a6f0b
use full imports
tscholak Nov 11, 2024
55b0b88
use full imports
tscholak Nov 11, 2024
1f975d2
use full imports
tscholak Nov 11, 2024
b665e91
don't load tokenizer during validation
tscholak Nov 11, 2024
af1439e
Merge remote-tracking branch 'origin/main' into tscholak/prepare-dataset
tscholak Nov 11, 2024
e51677f
simplify
tscholak Nov 12, 2024
1f447bb
simplify
tscholak Nov 12, 2024
fb50c13
address comments
tscholak Nov 12, 2024
33067c8
address comments
tscholak Nov 12, 2024
dbc221c
address comments
tscholak Nov 12, 2024
a2ae051
address comments
tscholak Nov 12, 2024
5107302
Merge remote-tracking branch 'origin/tscholak/prepare-dataset' into t…
tscholak Nov 12, 2024
d68ce82
fix link
tscholak Nov 12, 2024
b2675a3
resolve merge conflicts
tscholak Nov 13, 2024
2fad03c
clean up
tscholak Nov 13, 2024
223bab0
clean up
tscholak Nov 13, 2024
94008ea
wip
tscholak Nov 13, 2024
9706971
Merge branch 'main' of github.com:ServiceNow/Fast-LLM into tscholak/i…
tscholak Nov 13, 2024
763d843
update dependencies
tscholak Nov 13, 2024
af3f1f0
wip
tscholak Nov 14, 2024
cc6ae8b
revert changes
tscholak Nov 14, 2024
e96c411
wip
tscholak Nov 14, 2024
77c7416
wip
tscholak Nov 14, 2024
e8e9aea
wip
tscholak Nov 15, 2024
e697eeb
wip
tscholak Nov 15, 2024
8ec8ade
update configs
tscholak Nov 15, 2024
7d0cf49
remove downloads correctly
tscholak Nov 15, 2024
1 change: 1 addition & 0 deletions .github/ISSUE_TEMPLATE/bug_report.md
@@ -108,6 +108,7 @@ echo "=== END OF ENVIRONMENT INFORMATION ==="
# πŸ“ Additional Context

Include any other information that may help us understand the issue, such as:

- Recent changes to the configuration or code.
- Whether the issue occurs consistently or intermittently.
- Any troubleshooting steps you have already tried.
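The hunk above closes the environment-information block of the bug-report template. As a rough sketch of the kind of details worth collecting there β€” assuming a typical CUDA-enabled PyTorch setup, since the template's full command list is not shown in this excerpt β€” something like the following gathers the GPU, driver, PyTorch, and CUDA versions:

```bash
# Sketch: gather environment details for a bug report (assumes CUDA-enabled PyTorch is installed)
echo "=== ENVIRONMENT INFORMATION ==="
nvidia-smi --query-gpu=name,driver_version,memory.total --format=csv
python -c "import torch; print('torch', torch.__version__, '| cuda', torch.version.cuda)"
python -c "import torch; print('cuda available:', torch.cuda.is_available(), '| gpus:', torch.cuda.device_count())"
echo "=== END OF ENVIRONMENT INFORMATION ==="
```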
21 changes: 13 additions & 8 deletions .github/PULL_REQUEST_TEMPLATE.md
@@ -25,40 +25,45 @@ List the key changes introduced in this PR:
1. Change A
2. Change B

# βœ… Checklist
## βœ… Checklist

Make sure the following tasks are completed before submitting the PR:

### General:
- [ ] πŸ“œ I have read and followed the [contributing guidelines](CONTRIBUTING.md).
### General

- [ ] πŸ“œ I have read and followed the [contributing guidelines](https://servicenow.github.io/Fast-LLM/developers/contributing).
- [ ] 🏷️ I am using a clear and descriptive PR title that summarizes the key change or feature introduced.
- [ ] πŸŽ‰ The functionality is complete, and I have tested the changes.
- [ ] πŸ“ I have updated the documentation if needed.
- [ ] ⚠️ The change does not introduce any new issues (e.g., runtime warnings, type checker errors, linting problems, unhandled edge cases).
- [ ] 🧩 I have commented my code, especially in hard-to-understand areas.

### Dependencies and Configuration:
### Dependencies and Configuration

- [ ] πŸ‹ I have updated the Docker configuration or dependencies, if applicable.
- [ ] πŸ”„ I have ensured compatibility with the existing setup after dependency changes.

### Testing:
### Testing

- [ ] πŸ§ͺ I have added or updated tests to cover my changes.
- [ ] βœ”οΈ New and existing tests pass locally with my changes.
- [ ] 🚦 I have tested these changes on GPUs and verified training stability.
- [ ] πŸ‹οΈ I have tested the changes on realistic training workloads, if applicable.

### Performance Impact:
### Performance Impact

- [ ] πŸ“Š I have run benchmarks where applicable to evaluate the performance impact.
- [ ] βœ… The benchmarks show no performance regression.
- [ ] πŸš€ The benchmarks indicate a potential performance improvement.
- [ ] ⚠️ The benchmarks indicate a potential performance degradation.
- [ ] πŸ“ˆ I have provided benchmark results and detailed any performance impact below, if applicable.

# πŸ“Š Performance Impact Details
## πŸ“Š Performance Impact Details

If there is any impact on performance, describe it and provide benchmark results, if applicable:

---

# πŸ“ Additional Notes
## πŸ—’οΈ Additional Notes

Include any additional context, information, or considerations here, such as known issues, follow-up tasks, or backward compatibility concerns.
9 changes: 9 additions & 0 deletions .gitignore
@@ -8,6 +8,7 @@ __pycache__/

# Doc build
.cache
site

# Distribution / packaging
*.egg-info/
@@ -27,3 +28,11 @@ venv.bak/
# Project specifics
/.idea/
/.vscode/

# Devenv
.devenv*
devenv.local.nix
devenv.*

# direnv
.direnv
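The new `site` entry under `# Doc build` implies the documentation is generated into a local `site/` directory, which is the default output path of MkDocs. Assuming an MkDocs-based setup (an inference from the ignore patterns, not something this diff states), a local docs preview could look like:

```bash
# Sketch: build and preview the documentation locally, assuming an MkDocs-based site
pip install mkdocs        # plus whatever theme and plugins the project's docs require
mkdocs build              # writes the static site into ./site, which .gitignore now excludes
mkdocs serve              # live-reloading preview at http://127.0.0.1:8000
```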
35 changes: 35 additions & 0 deletions .markdownlint.yaml
@@ -0,0 +1,35 @@
# See https://github.com/DavidAnson/markdownlint/blob/v0.32.1/schema/.markdownlint.yaml for schema documentation

# Default state for all rules
default: true

# MD007/ul-indent : Unordered list indentation : https://github.com/DavidAnson/markdownlint/blob/v0.32.1/doc/md007.md
MD007:
  # Spaces for indent
  indent: 2

# MD010/no-hard-tabs : Hard tabs : https://github.com/DavidAnson/markdownlint/blob/v0.32.1/doc/md010.md
MD010:
  # Include code blocks
  code_blocks: false
  # Fenced code languages to ignore
  ignore_code_languages: []
  # Number of spaces for each hard tab
  spaces_per_tab: 2

# MD013/line-length : Line length : https://github.com/DavidAnson/markdownlint/blob/v0.32.1/doc/md013.md
MD013: false

# MD024/no-duplicate-heading : Multiple headings with the same content : https://github.com/DavidAnson/markdownlint/blob/v0.32.1/doc/md024.md
MD024: false

# MD030/list-marker-space : Spaces after list markers : https://github.com/DavidAnson/markdownlint/blob/v0.32.1/doc/md030.md
MD030:
  # Spaces for single-line unordered list items
  ul_single: 1
  # Spaces for single-line ordered list items
  ol_single: 1
  # Spaces for multi-line unordered list items
  ul_multi: 1
  # Spaces for multi-line ordered list items
  ol_multi: 1
4 changes: 4 additions & 0 deletions .pre-commit-config.yaml
@@ -48,3 +48,7 @@ repos:
        args:
          - "--config"
          - "./pyproject.toml"
  - repo: https://github.com/markdownlint/markdownlint
    rev: v0.11.0
    hooks:
      - id: markdownlint
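With the markdownlint hook registered in the pre-commit configuration, it can be run locally through pre-commit itself. A minimal sketch, assuming pre-commit is installed in the development environment:

```bash
# Sketch: exercise the newly added markdownlint hook with pre-commit
pip install pre-commit
pre-commit install                        # register the git hook once per clone
pre-commit run markdownlint --all-files   # lint all Markdown files tracked in the repo
```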
28 changes: 14 additions & 14 deletions CODE_OF_CONDUCT.md
@@ -1,8 +1,8 @@
### ServiceNow Open Source Code-of-Conduct
# ServiceNow Open Source Code-of-Conduct

This code of conduct provides guidelines for participation in ServiceNow-managed open-source communities and projects.

**Discussion forum guidelines**
## Discussion forum guidelines

Communities thrive when members support each other and provide useful feedback.

@@ -11,12 +11,12 @@ Communities thrive when members support each other and provide useful feedback.
- User Contributions must not include material that is defamatory, obscene, indecent, abusive, offensive, harassing, violent, hateful, inflammatory or otherwise objectionable.
- Lively and collegial discussions are always encouraged in a healthy community. It is okay to argue facts but not okay to argue personalities or personal beliefs.
- Do not use text formats such as all caps or bold that may be read as annoying, rude or send a strong message.
- Do not publish anyone’s private personal information without their explicit consent.
- Do not publish anyone's private personal information without their explicit consent.
- Avoid using abbreviations or terminology that others may not understand. An abbreviation may mean something to you but in another context or country, it may have another meaning.
- Be accountable for your actions by correcting your mistakes and indicating where you have changed a previous post of yours.
- Mark content as correct and helpful, and provide feedback. If you read a discussion post that you find helpful, we encourage you to leave a positive vote and comment in the replies. If you find a post that is unhelpful, please provide more information in the issue comments.

**Issue board guidelines**
## Issue board guidelines

Many open-source projects provide an Issues board, with similar functionality to a Discussions forum. The same rules from the discussion forum guidelines apply to the Issues board.

@@ -25,22 +25,22 @@ ServiceNow suggests the following technical support pathways for open-source pro
1. Clearly identify and document the issue or question you have.
2. View the Documentation.
3. Search the Discussions.
4. Search the project knowledge base or Wiki for known errors, useful solutions, and troubleshooting tips.
5. Check the project guidelines in the [`CONTRIBUTING.md`](CONTRIBUTING.md) file if you would like details on how you can submit a change. Community contributions are valued and appreciated!
6. Log an Issue if it hasn’t already been logged. If the issue has already been logged by another user, vote it up, and add a comment with additional or missing information. Do your best to choose the correct category when logging a new issue. This will make it easier to differentiate bugs from new feature requests or ideas. If after logging an issue you find the solution, please close your issue and provide a comment with the solution. This will help the project owners and other users.
4. Search the project documentation for known errors, useful solutions, and troubleshooting tips.
5. Check the project contribution guidelines if you would like details on how you can submit a change. Community contributions are valued and appreciated!
6. Log an Issue if it hasn't already been logged. If the issue has already been logged by another user, vote it up, and add a comment with additional or missing information. Do your best to choose the correct category when logging a new issue. This will make it easier to differentiate bugs from new feature requests or ideas. If after logging an issue you find the solution, please close your issue and provide a comment with the solution. This will help the project owners and other users.
7. Contact the project team contributors of the project to see if they can help as a last resort only.

**Repositories**
## Repositories

- Read and follow the license instructions
- Remember to include citations if you use someone else’s work in your own project. Use the [`CITATION.cff`](CITATION.cff) to find the correct project citation reference.
- β€˜Star’ project repos to save for future reference.
- β€˜Watch’ project repos to get notifications of changes – this can get noisy for some projects, so only watch the ones you really need to track closely.
- Remember to include citations if you use someone else's work in your own project. Use the [`CITATION.cff`](CITATION.cff) to find the correct project citation reference.
- β€˜Star' project repos to save for future reference.
- β€˜Watch' project repos to get notifications of changes – this can get noisy for some projects, so only watch the ones you really need to track closely.

**Enforcement and reporting**
## Enforcement and reporting

We encourage community members and users to help each other and to resolve issues amongst themselves as much as possible. If a matter cannot be resolved in good faith within the means available, please reach out to a team member or email fast-llm-team@servicenow.com.
We encourage community members and users to help each other and to resolve issues amongst themselves as much as possible. If a matter cannot be resolved in good faith within the means available, please reach out to a team member or email [fast-llm-team@servicenow.com](mailto:fast-llm-team@servicenow.com).

**ServiceNow Disclaimer.**
## ServiceNow Disclaimer

We may, but are under no obligation to, monitor or censor comments made by users or content provided by contributors and we are not responsible for the accuracy, completeness, appropriateness or legality of anything posted, depicted or otherwise provided by third‑party users and we disclaim any and all liability relating thereto.
63 changes: 2 additions & 61 deletions CONTRIBUTING.md
@@ -1,62 +1,3 @@
# Contributing to Fast-LLM πŸš€
# Contributing to Fast-LLM

Thank you for your interest in contributing to Fast-LLM! We're thrilled to have you here, and your support is invaluable in helping us accelerate LLM training to full speed. This guide will walk you through the steps to contribute, from reporting issues to submitting changes and setting up your development environment.

If you have questions or want to start a discussion, feel free to [open a discussion](https://github.com/ServiceNow/Fast-LLM/discussions) on our GitHub page.

## Getting Started

To get started with contributing to Fast-LLM, follow these steps to set up your environment:

1. **Set Up the Development Environment**: Fast-LLM is built on [PyTorch](https://pytorch.org/) and [Triton](https://triton-lang.org/). Check out our [setup guide](https://servicenow.github.io/Fast-LLM/development/setup) for instructions on getting everything ready, including the development environment and dependencies.
2. **Learn Our Best Practices**: Get familiar with our [development best practices](https://servicenow.github.io/Fast-LLM/development/dev-practices/), which cover code style, pre-commit hooks, and testing strategies.
3. **Launch Fast-LLM Locally or with Docker**: Need help getting started? Follow the instructions in the [launching section](https://servicenow.github.io/Fast-LLM/development/launching) to get Fast-LLM up and running.

## How to Report a Bug 🐞

Found a bug? Let's squash it together! [Open an issue](https://github.com/ServiceNow/Fast-LLM/issues/new/choose) and select "Bug report." Please include as much information as possible:

- Steps to reproduce the issue.
- What you expected to happen versus what actually happened.
- Logs, Fast-LLM configuration, and error messages.
- Details about your environment setup (e.g., CUDA hardware, PyTorch version, CUDA version).

If you're familiar with the codebase, consider adding a failing unit test to demonstrate the problem (optional, but helpful!).

## Proposing Changes

Before diving into code, [open an issue](https://github.com/ServiceNow/Fast-LLM/issues) to discuss your proposal. This is especially important if you're planning significant changes or adding new dependencies. Once your idea is approved, follow these steps:

1. **Fork the Repository**: [Fork Fast-LLM](https://github.com/ServiceNow/Fast-LLM/fork) to your own GitHub account.
2. **Clone Your Fork Locally**: Use `git clone` to bring the code to your local machine.
3. **Create a New Branch**: Name your branch descriptively, such as `feature/awesome-feature` or `fix/nasty-bug`.
4. **Make Your Changes**: Work your magic! Don't forget to add or update tests, benchmarks, or configurations as needed.
5. **Create a Properly Titled Pull Request**: When you're ready to open a PR, make sure to use a clear and descriptive title that follows our [PR title guidelines](https://servicenow.github.io/Fast-LLM/development/pr-title-guidelines). This title will become the commit message for the squashed merge.
6. **Push to Your Fork**: Push the branch to your GitHub fork.
7. **Open a Pull Request**: [Submit a pull request](https://github.com/ServiceNow/Fast-LLM/compare) to the `main` branch. Reference the original issue number and provide a brief summary of your changes.

### Guidelines for a Successful Pull Request

Here are some tips to ensure your pull request gets reviewed and merged promptly:

- **Follow our coding standards**: Stick to our [development best practices](https://servicenow.github.io/Fast-LLM/development/dev-practices/) to keep the code clean and consistent.
- **Write tests**: Verify your changes with unit tests for new features or bug fixes.
- **Test on GPUs and real-world workloads**: Since Fast-LLM is all about training large language models, make sure your changes work smoothly in GPU environments and on typical training setups.
- **Run benchmarks and performance tests**: Make sure your changes don't slow things down. If there's any impact on performance, provide benchmark results to back it up.
- **Avoid introducing new issues**: Check that there are no new runtime warnings, type checker errors, linting problems, or unhandled edge cases.
- **Comment non-trivial code**: Make your code easy to understand for others.
- **Keep sensitive data out**: Make sure your code or commit messages don't expose private or proprietary information.
- **Use the [PR template](https://github.com/ServiceNow/Fast-LLM/blob/main/.github/PULL_REQUEST_TEMPLATE.md)**: Complete the checklist to make sure everything is in order before hitting submit.

## Seeking Help or Clarification

If you're unsure about something or need help, you've got options:

- **GitHub Discussions**: [Start a discussion](https://github.com/ServiceNow/Fast-LLM/discussions) if you need advice or just want to chat.
- **Project Maintainers**: Mention a maintainer in an issue or pull request if you need a review or guidance.

## Contributors

We're grateful for all the awesome contributors who help make Fast-LLM better. Join our contributors' list and make your first contribution!

To learn more about the team and maintainers, visit our [About page](https://servicenow.github.io/Fast-LLM/about-us/).
Please refer to the [contributing guidelines](https://servicenow.github.io/Fast-LLM/developers/contributing) for more information on how to contribute to Fast-LLM.
10 changes: 7 additions & 3 deletions README.md
@@ -14,7 +14,11 @@ Made with ❀️ by [ServiceNow Research][servicenow-research]

## Overview

Fast-LLM is a new open-source library for training large language models, built on [PyTorch][pytorch] and [Triton][triton]. It is extremely fast, scales to large clusters, supports a wide range of model architectures, and is easy to use. Unlike commercial frameworks like Megatron-LM, which are largely closed off and fragmented across forks, Fast-LLM is fully open-source and encourages community-driven development. Researchers can freely customize and optimize as needed, making it a flexible and hackable alternative that combines the speed of specialized tools with the openness of libraries like [Hugging Face Transformers][transformers].
Fast-LLM is a cutting-edge open-source library for training large language models with exceptional speed, scalability, and flexibility. Built on [PyTorch][pytorch] and [Triton][triton], Fast-LLM empowers AI teams to push the limits of generative AI, from research to production.

Optimized for training models of all sizesβ€”from small 1B-parameter models to massive clusters with 70B+ parametersβ€”Fast-LLM delivers faster training, lower costs, and seamless scalability. Its fine-tuned kernels, advanced parallelism techniques, and efficient memory management make it the go-to choice for diverse training needs.

As a truly open-source project, Fast-LLM allows full customization and extension without proprietary restrictions. Developed transparently by a community of professionals on GitHub, the library benefits from collaborative innovation, with every change discussed and reviewed in the open to ensure trust and quality. Fast-LLM combines professional-grade tools with unified support for GPT-like architectures, offering the cost efficiency and flexibility that serious AI practitioners demand.

> [!NOTE]
> Fast-LLM is not affiliated with Fast.AI, FastHTML, FastAPI, FastText, or other similarly named projects. Our library's name refers to its speed and efficiency in language model training.
@@ -25,7 +29,7 @@ Fast-LLM is a new open-source library for training large language models, built
- ⚑️ Optimized kernel efficiency and reduced overheads.
- πŸ”‹ Optimized memory usage for best performance.
- ⏳ Minimizes training time and cost.

2. πŸ“ˆ **Fast-LLM is Highly Scalable**:
- πŸ“‘ Distributed training across multiple GPUs and nodes using 3D parallelism (Data, Tensor, and Pipeline).
- πŸ”— Supports sequence length parallelism to handle longer sequences effectively.
@@ -49,7 +53,7 @@ Fast-LLM is a new open-source library for training large language models, built

5. 🌐 **Fast-LLM is Truly Open Source**:
- βš–οΈ Licensed under [Apache 2.0][license] for maximum freedom to use Fast-LLM at work, in your projects, or for research.
- πŸ’» Fully developed on GitHub with a public [roadmap][roadmap] and transparent [issue tracking][issues].
- πŸ’» Transparently developed on GitHub with public [roadmap][roadmap] and [issue tracking][issues].
- 🀝 Contributions and collaboration are always welcome!

## Usage
2 changes: 1 addition & 1 deletion SECURITY.md
@@ -16,7 +16,7 @@ If you find a vulnerability in ServiceNow systems, products, or network infrastr
If you find a vulnerability in this open-source project published by the ServiceNow Research team, please email [servicenow-research@servicenow.com](mailto:servicenow-research@servicenow.com) to report your findings.

We will process your report as soon as possible, depending on the severity of your report. We appreciate everyone's help in disclosing vulnerabilities in a responsible manner.

## Guidelines

Please follow the guidelines below when [disclosing vulnerabilities](https://www.servicenow.com/company/trust/privacy/responsible-disclosure.html):