
Getting Started | Defog Docs #702

Open
irthomasthomas opened this issue Mar 5, 2024 · 1 comment
Labels

  • CLI-UX: Command Line Interface user experience and best practices
  • code-generation: code generation models and tools like copilot and aider
  • data-validation: Validating data structures and formats
  • dataset: public datasets and embeddings
  • software-engineering: Best practice for software engineering

Comments

@irthomasthomas (Owner)

Getting Started | Defog Docs

Description:
Let's discover Defog in less than 5 minutes. Here's a 1-minute demo of the setup.

What you'll need:

  • A Defog API Key. Sign up here for a free API key.
  • Python 3
  • Pip
  • Drivers for your database

If you do not have pip installed on your computer, you can install it here.

Initializing Defog:

Get started by installing our CLI.

To do this, just run the following commands on your terminal:

pip install --upgrade 'defog[snowflake]'

If you're not using Snowflake, then you can replace the above with:

pip install --upgrade 'defog[postgres]'

or

pip install --upgrade 'defog[mysql]'

or

pip install --upgrade 'defog[bigquery]'

Then initialize Defog:

defog init

From here, you can just follow the instructions in the CLI wizard to get started!

Important Note: Doing this does not send your database credentials to Defog! Your credentials stay on your machine; they are used only so the CLI can read your database metadata (table names, column names, column descriptions), and so you can execute the queries generated by Defog locally.
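To make concrete what "database metadata" means here, below is a minimal sketch of collecting table and column names locally. It uses SQLite purely for illustration; this is not Defog's actual implementation, and the dictionary keys are assumptions, not the CLI's documented format:

```python
import sqlite3

def extract_metadata(conn: sqlite3.Connection) -> list[dict]:
    """Collect table and column names from a local database.

    Illustrative only: the Defog CLI gathers equivalent metadata for
    your configured database; this sketch uses SQLite PRAGMAs.
    """
    rows = []
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"
    )
    for (table,) in tables.fetchall():
        for col in conn.execute(f"PRAGMA table_info('{table}')"):
            # PRAGMA row layout: (cid, name, type, notnull, dflt_value, pk)
            rows.append({
                "table_name": table,
                "column_name": col[1],
                "data_type": col[2],
                "column_description": "",  # filled in by hand later
            })
    return rows

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
metadata = extract_metadata(conn)
print(metadata)
```

Note that only schema information is collected; no table rows are read, which mirrors the privacy note above.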

Generating a CSV file for your metadata:

To generate a CSV file that contains your metadata, just list all the table names you want, separated by spaces.

defog gen <table1> <table2> <table3> ...

Once generated, you can edit the CSV to give an accurate description of your metadata. We have included steps to do that in this Cookbook.
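As a sketch of what that editing step might look like programmatically, here is a hedged Python example. The CSV column headers used here (table_name, column_name, data_type, column_description) are an assumption about the generated file's layout, not the documented format:

```python
import csv
import io

# Hypothetical metadata CSV in the shape `defog gen` might produce;
# the header names are assumptions for illustration only.
raw = """table_name,column_name,data_type,column_description
users,id,integer,
users,email,text,
"""

# Descriptions to add, keyed by (table, column).
descriptions = {
    ("users", "id"): "Primary key for the users table",
    ("users", "email"): "User's login email address",
}

reader = csv.DictReader(io.StringIO(raw))
rows = []
for row in reader:
    key = (row["table_name"], row["column_name"])
    row["column_description"] = descriptions.get(key, row["column_description"])
    rows.append(row)

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=reader.fieldnames)
writer.writeheader()
writer.writerows(rows)
print(out.getvalue())
```

In practice you would simply open the generated CSV in a spreadsheet or text editor; the point is that only the description column needs filling in before upload.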

Uploading your metadata:

To upload your metadata to Defog, just pass in the path to your CSV via defog update.

defog update path_to_metadata_csv

For example, if your CSV is saved in the current folder as defog_metadata.csv, you would run defog update defog_metadata.csv.

Making a test query:

Once your metadata has been updated, you can start making test queries. An example query is below:

defog query 'how many users do we have?'

(Optional) Deploying to AWS Lambda or GCP Functions:

If you want to deploy Defog as a microservice on your cloud, you can run the following commands:

# for AWS Lambda
# before this, you must install chalice with `pip install chalice`
defog deploy aws
# for GCP Functions
# before this, you must have gcloud installed on your machine
defog deploy gcp

Once you do this, you will see the API endpoint for your microservice in your console.

URL: https://docs.defog.ai/getting-started/

@irthomasthomas added the CLI-UX, code-generation, data-validation, dataset, and software-engineering labels on Mar 5, 2024
irthomasthomas commented Mar 5, 2024

Related content

#688 - Similarity score: 0.86

#623 - Similarity score: 0.85

#696 - Similarity score: 0.85

#640 - Similarity score: 0.85

#461 - Similarity score: 0.85

#644 - Similarity score: 0.84
