# gimme_readme

`gimme_readme` is a command-line tool powered by AI that generates a comprehensive `README.md` file for your project. It analyzes multiple source code files at once, providing concise explanations of each file's purpose, functionality, and key components, all in a single, easy-to-read document.
See our 0.1 Release Demo!
- Getting Started
- Usage
- Example Usage
- Supported Models by Providers
- Contributing
- Code of Conduct
- License
- Author
## Getting Started

To get started with `gimme_readme`, follow these steps:

1. Install the latest long-term support (LTS) version of Node.js for your operating system.

2. Run the following command to install `gimme_readme` globally:

   ```bash
   npm i -g gimme_readme
   ```

   NOTE: macOS/Linux users may need to run `sudo npm i -g gimme_readme`.

3. Generate a configuration file by running the following command in any folder you'd like:

   ```bash
   gr-ai -c
   ```

   This command creates a `.gimme_readme_config` file in your home directory. Do not move this file from this location.

4. Open the `.gimme_readme_config` file and add your API keys and preferred default values as prompted. Ensure you leave the variable names unchanged (a rough sketch of this file follows these steps).

   - Subsequent runs of `gr-ai -c` will display the path to your existing config file.
   - See here for an example of what a `.gimme_readme_config` file looks like!
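As a loose illustration only (the variable names below are assumptions for this sketch, not necessarily the tool's actual names; your generated file and the linked example are authoritative), a `.gimme_readme_config` file is a plain key/value file along these lines:

```bash
# Hypothetical sketch of .gimme_readme_config -- keep the variable names from
# your generated file unchanged and only fill in the values it asks for.
GEMINI_KEY=your_gemini_api_key
GROQ_API_KEY=your_groq_api_key
# Illustrative preferred defaults:
MODEL=gemini-1.5-flash
OUTPUT_FILE=README.md
TEMPERATURE=0.5
```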
## Usage

`gimme_readme` uses AI to generate a `README.md` file that explains a given source code file or files. Below are the available options:
| Option | Description |
| --- | --- |
| `-v`, `--version` | Output the current version |
| `-f`, `--file [files...]` | Specify one or more files to generate explanations for |
| `-o`, `--outputFile <string>` | Specify the file to output the generated README to |
| `-m`, `--model <string>` | Choose a free-tier AI model to use (e.g., gemini, openai, groq) |
| `-p`, `--prompt <string>` | Provide a custom prompt to the AI |
| `-pf`, `--promptFile <string>` | Specify a prompt file |
| `-c`, `--config` | Show the location of the configuration file and provide links to examples |
| `-t`, `--temperature <number>` | Set the level of determinism for the AI (value between 0 and 1) |
| `-tkn`, `--token` | Get information on the tokens consumed (i.e., prompt, completion, and total tokens) |
| `-h`, `--help` | Display help for the command |
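As a quick illustration of how these flags combine (the file path and prompt text below are placeholders, not files from this project):

```bash
# Placeholder file name and prompt -- adjust them to your own project
gr-ai -f src/index.js -p "Focus on the exported functions and their parameters" -t 0.2 -o README.md -m gemini-1.5-flash
```

Options you omit are expected to fall back to the preferred default values set in your `.gimme_readme_config` file.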
## Example Usage

Below are some simple examples to help you get started with `gimme_readme`. For more detailed examples, see here.
To display the help menu with all available commands:

```bash
gr-ai -h
```
To show the current version of `gimme_readme`:

```bash
gr-ai -v
```
To generate a `README.md` file for one or more source files:

```bash
gr-ai -f example.js anotherFile.py -o README.md -m gemini-1.5-flash
gr-ai -f example.js anotherFile.py -o README.md -m gemini-1.5-flash -tkn
gr-ai -f example.js anotherFile.py -o README.md -m llama3-8b-8192 --token
```
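If you keep a longer prompt in its own file, the `-pf` flag can point to it (the prompt file name below is a placeholder):

```bash
# Placeholder prompt file containing your instructions to the AI
gr-ai -f example.js -pf my-prompt.txt -o README.md -m llama3-8b-8192
```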
## Supported Models by Providers

| Provider | Models |
| --- | --- |
| `gemini` | `gemini-1.5-flash` |
| `groq` | `llama3-8b-8192` |
## Contributing

We welcome contributions to improve `gimme_readme`! To get started with contributing, we ask that you read our contributing guide.
## Code of Conduct

We are committed to providing a welcoming and inclusive experience for everyone. By participating in this project, you agree to abide by our Code of Conduct.
## License

This project is licensed under the MIT license. You are free to use, modify, and distribute this code, subject to the terms in the LICENSE file.
## Author

Developed by Peter Wan.

For any questions or feedback, feel free to reach out through the GitHub repository.