Commit 2a1c4a4

Merge branch 'latest' into 3618-docs-rfc-unite-some-get-started-pages-into-one

billy-the-fish authored Feb 6, 2025
2 parents ed4109a + 1547f53
Showing 86 changed files with 2,677 additions and 1,592 deletions.
1 change: 1 addition & 0 deletions .github/ISSUE_TEMPLATE/broken_link_report.md
@@ -3,6 +3,7 @@ name: Broken link report
about: Automated issue template for reporting link checker fails
title: Failing link check ({{ date | date('dddd, MMMM Do') }})
labels: bug, automated issue, link check
assignees: atovpeko
---

The broken link check failed. Check [the workflow logs](https://github.com/timescale/docs/actions/workflows/daily-link-checker.yml) to identify the failing links.
72 changes: 72 additions & 0 deletions .github/styles/templates/integration.md
@@ -0,0 +1,72 @@
---
title: Integrate <third-party tool name> with Timescale Cloud
excerpt: SEO-friendly explanation of why the user performs this integration
keywords: [<third-party tool name>]
---

import IntegrationPrereqs from "versionContent/_partials/_integration-prereqs.mdx";

# Integrate <third-party tool name> with $CLOUD_LONG

// Explain what the third-party tool is and what it does, in your own words, and link to the product docs.

// Provide context for the integration steps, for example, if an additional connector is used.

// See https://docs.timescale.com/use-timescale/latest/integrations/grafana/ for an example.

## Prerequisites

<IntegrationPrereqs />

- Install <third-party tool name> // Mention both server and cloud versions, if present. Link to installation pages.

## Connect your $SERVICE_LONG

To connect your $SERVICE_LONG to <third-party tool name>:

<Procedure>

1. **Log in to <third-party tool name>**

// Sub-steps, code, screenshots if necessary.

1. **Configure the connection**

// Sub-steps, code, screenshots if necessary. Link to [Find your connection details][connection-info].

...

1. **Test the connection**

// Sub-steps, code, screenshots if necessary.

</Procedure>

## Test the integration with $CLOUD_LONG

// Add only if there is a simple way to illustrate how the two solutions work together.

Take the following steps to <whatever the tool must do in conjunction with Timescale Cloud>:

<Procedure>

// Steps to test out the integration using a defined dataset. See the example sketch after this procedure.

</Procedure>
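// For example, a minimal defined dataset can be created and queried with psql, as in the sketch below. The `integration_test` table is a hypothetical placeholder; `<HOST>`, `<PORT>`, and `<PASSWORD>` come from the service connection details.

```bash
# Create a tiny test table, insert one row, and read it back.
# Replace <HOST>, <PORT>, and <PASSWORD> with your service connection details.
psql "postgres://tsdbadmin:<PASSWORD>@<HOST>:<PORT>/tsdb?sslmode=require" <<'SQL'
CREATE TABLE IF NOT EXISTS integration_test (
    time  timestamptz NOT NULL,
    value double precision
);
INSERT INTO integration_test VALUES (now(), 42);
SELECT * FROM integration_test;
SQL
```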

You have successfully integrated <third-party tool name> with $CLOUD_LONG.

[connection-info]: /use-timescale/:currentVersion:/integrations/find-connection-details/













20 changes: 20 additions & 0 deletions .github/workflows/deploy-lambdas.yml
@@ -0,0 +1,20 @@
name: Deploy redirects to lambda
on:
  push:
    paths:
      - lambda/**

permissions: {}

jobs:
  trigger:
    name: Update changed lambda redirects
    runs-on: ubuntu-latest
    steps:
      - name: Repository Dispatch
        uses: peter-evans/repository-dispatch@26b39ed245ab8f31526069329e112ab2fb224588
        with:
          token: ${{ secrets.ORG_AUTOMATION_TOKEN }}
          repository: timescale/web-documentation
          event-type: build-lambda
          client-payload: '{ "branch": "${{ github.ref_name }}" }'
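For reference, the same `build-lambda` event that this workflow dispatches can be raised by hand through the GitHub REST API, for example when debugging the receiving repository. This is a sketch only; it assumes `GITHUB_TOKEN` holds a token comparable to `ORG_AUTOMATION_TOKEN`, authorized to send dispatch events to `timescale/web-documentation`:

```bash
# Manually trigger the same repository_dispatch event the workflow sends.
# GITHUB_TOKEN is assumed to be authorized for timescale/web-documentation.
curl -X POST \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer $GITHUB_TOKEN" \
  https://api.github.com/repos/timescale/web-documentation/dispatches \
  -d '{"event_type": "build-lambda", "client_payload": {"branch": "latest"}}'
```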
4 changes: 2 additions & 2 deletions _partials/_cloud-connect.md
@@ -50,8 +50,8 @@ Quick recap. You:
[portal-data-mode]: https://console.cloud.timescale.com/dashboard/services?popsql
[account-portal]: https://console.cloud.timescale.com/dashboard/account
[services-portal]: https://console.cloud.timescale.com/dashboard/services
[install-psql]: /use-timescale/:currentVersion:/integrations/query-admin/psql/
[install-psql]: /use-timescale/:currentVersion:/integrations/psql/
[popsql]: /getting-started/:currentVersion:/run-queries-from-console/#data-mode
[run-sqleditor]: /getting-started/:currentVersion:/run-queries-from-console/#sql-editor
[install-psql]: /use-timescale/:currentVersion:/integrations/query-admin/psql/
[install-psql]: /use-timescale/:currentVersion:/integrations/psql/
[hypertables]: /use-timescale/:currentVersion:/hypertables/about-hypertables/#hypertable-partitioning
2 changes: 1 addition & 1 deletion _partials/_cloud-create-connect-tutorials.md
@@ -39,4 +39,4 @@ command-line utility. If you've used PostgreSQL before, you might already have
</Procedure>
[timescale-portal]: https://console.cloud.timescale.com/
[install-psql]: /use-timescale/:currentVersion:/integrations/query-admin/psql/
[install-psql]: /use-timescale/:currentVersion:/integrations/psql/
2 changes: 1 addition & 1 deletion _partials/_cloud-intro.md
@@ -7,7 +7,7 @@ use as is, or extend with capabilities specific to your business needs. The avai
analytics and other use cases.
Get faster time-based queries with hypertables, continuous aggregates, and columnar storage. Save on storage with
native compression, data retention policies, and bottomless data tiering to Amazon S3.
- **AI and vector]**: PostgreSQL with vector extensions. Use PostgreSQL as a vector database with
- **AI and vector**: PostgreSQL with vector extensions. Use PostgreSQL as a vector database with
purpose built extensions for building AI applications from start to scale. Get fast and accurate similarity search
with the pgvector and pgvectorscale extensions. Create vector embeddings and perform LLM reasoning on your data with
the pgai extension.
133 changes: 133 additions & 0 deletions _partials/_cloudwatch-data-exporter.md
@@ -0,0 +1,133 @@
<Procedure>

1. **In $CONSOLE, open [Integrations][console-integrations]**
1. **Click `New exporter`**
1. **Select the data type and specify `AWS CloudWatch` for provider**

![Add CloudWatch data exporter](https://assets.timescale.com/docs/images/tsc-integrations-cloudwatch.png)

1. **Provide your AWS CloudWatch configuration**

- The AWS region must be the same for your $CLOUD_LONG exporter and AWS CloudWatch Log group.
- The exporter name appears in $CONSOLE. Best practice is to make the name easily recognizable.
- For CloudWatch credentials, either use an [existing CloudWatch Log group][console-cloudwatch-configuration]
or [create a new one][console-cloudwatch-create-group]. If you're uncertain, use
the default values. For more information, see [Working with log groups and log streams][cloudwatch-log-naming].

1. **Choose the authentication method to use for the exporter**

<Tabs label="Authentication methods">

<Tab title="IAM role">

<Procedure>

$SERVICE_LONGs run in AWS. Best practice is to use [IAM Roles for Service Accounts (IRSA)][irsa] to
manage access between your $SERVICE_SHORTs and your AWS resources.

Create the IRSA role following this [AWS blog post][cross-account-iam-roles].

When you create the IAM OIDC provider:
- Set the URL to the [region where the exporter is being created][reference].
- Add the role as a trusted entity.

The following example shows a correctly configured IRSA role (a CLI sketch for creating the role from these documents appears after them):

- Permission Policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:PutLogEvents",
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:DescribeLogStreams",
        "logs:DescribeLogGroups",
        "logs:PutRetentionPolicy",
        "xray:PutTraceSegments",
        "xray:PutTelemetryRecords",
        "xray:GetSamplingRules",
        "xray:GetSamplingTargets",
        "xray:GetSamplingStatisticSummaries",
        "ssm:GetParameters"
      ],
      "Resource": "*"
    }
  ]
}
```
- Role with a Trust Policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::12345678910:oidc-provider/irsa-oidc-discovery-prod.s3.us-east-1.amazonaws.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "irsa-oidc-discovery-prod.s3.us-east-1.amazonaws.com:aud": "sts.amazonaws.com"
        }
      }
    },
    {
      "Sid": "Statement1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::12345678910:role/my-exporter-role"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```
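As a rough sketch of how the two documents above can be wired together with the AWS CLI (the file names are placeholders, and `my-exporter-role` matches the role name used in the trust policy example):

```bash
# Create the IRSA role from the trust policy, then attach the permission policy inline.
# trust-policy.json and permission-policy.json contain the documents shown above.
aws iam create-role \
  --role-name my-exporter-role \
  --assume-role-policy-document file://trust-policy.json

aws iam put-role-policy \
  --role-name my-exporter-role \
  --policy-name exporter-permissions \
  --policy-document file://permission-policy.json
```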

</Procedure>

</Tab>

<Tab title="CloudWatch credentials">

<Procedure>

When you use CloudWatch credentials, you link your $SERVICE_LONG to an AWS Identity and
Access Management (IAM) user whose access is restricted to CloudWatch:

1. Retrieve the user information from [IAM > Users in AWS console][list-iam-users].

If you do not have an AWS user with access restricted to CloudWatch only,
[create one][create-an-iam-user].
For more information, see [Creating IAM users (console)][aws-access-keys].

1. Enter the credentials for the AWS IAM user.

AWS keys grant access to your AWS services. To keep your AWS account secure, restrict users to the minimum required permissions and always store your keys in a safe location; a CLI sketch for creating a suitably restricted user follows this procedure. To avoid handling long-lived keys altogether, use the IAM role authentication method instead.

</Procedure>
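If you prefer to create the restricted user from the command line instead of the console, the following sketch can serve as a starting point. The user name is a placeholder, and `CloudWatchLogsFullAccess` is one possible AWS-managed policy; substitute a narrower custom policy if your security requirements demand it.

```bash
# Create an IAM user restricted to CloudWatch Logs and generate access keys for the exporter.
aws iam create-user --user-name timescale-cloudwatch-exporter
aws iam attach-user-policy \
  --user-name timescale-cloudwatch-exporter \
  --policy-arn arn:aws:iam::aws:policy/CloudWatchLogsFullAccess
aws iam create-access-key --user-name timescale-cloudwatch-exporter
```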

</Tab>

</Tabs>

1. **Select the AWS region your CloudWatch services run in, then click `Create exporter`**

</Procedure>

[console-integrations]: https://console.cloud.timescale.com/dashboard/integrations
[console-cloudwatch-configuration]: https://console.aws.amazon.com/cloudwatch/home#logsV2:log-groups
[console-cloudwatch-create-group]: https://console.aws.amazon.com/cloudwatch/home#logsV2:log-groups/create-log-group
[cloudwatch-log-naming]: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html
[cross-account-iam-roles]: https://aws.amazon.com/blogs/containers/cross-account-iam-roles-for-kubernetes-service-accounts/
[reference]: #reference
[list-iam-users]: https://console.aws.amazon.com/iam/home#/users
[create-an-iam-user]: https://console.aws.amazon.com/iam/home#/users/create
[aws-access-keys]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html#id_users_create_console
[irsa]: https://aws.amazon.com/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/
4 changes: 2 additions & 2 deletions _partials/_create-hypertable.md
@@ -40,8 +40,8 @@ To create a hypertable:
</Procedure>

[services-portal]: https://console.cloud.timescale.com/dashboard/services
[install-psql]: /use-timescale/:currentVersion:/integrations/query-admin/psql/
[install-psql]: /use-timescale/:currentVersion:/integrations/psql/
[popsql]: /getting-started/:currentVersion:/run-queries-from-console/#data-mode
[run-sqleditor]: /getting-started/:currentVersion:/run-queries-from-console/#sql-editor
[install-psql]: /use-timescale/:currentVersion:/integrations/query-admin/psql/
[install-psql]: /use-timescale/:currentVersion:/integrations/psql/
[hypertables]: /use-timescale/:currentVersion:/hypertables/about-hypertables/#hypertable-partitioning
17 changes: 17 additions & 0 deletions _partials/_datadog-data-exporter.md
@@ -0,0 +1,17 @@
<Procedure>

1. **In $CONSOLE, open [Integrations][console-integrations]**
1. **Click `New exporter`**
1. **Select `Metrics` for `Data type` and `Datadog` for provider**

![Add Datadog exporter](https://assets.timescale.com/docs/images/tsc-integrations-datadog.webp)

1. **Choose your AWS region and provide the API key**

The AWS region must be the same for your $CLOUD_LONG exporter and the Datadog provider. To sanity-check the API key first, see the sketch after this procedure.

1. **Set `Site` to your Datadog region, then click `Create exporter`**

</Procedure>
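To confirm that the API key is valid before creating the exporter, you can call Datadog's key-validation endpoint. The following sketch assumes the US1 site (`api.datadoghq.com`); other Datadog sites use a different domain:

```bash
# Validate a Datadog API key against the US1 site.
curl -s "https://api.datadoghq.com/api/v1/validate" \
  -H "Accept: application/json" \
  -H "DD-API-KEY: ${DD_API_KEY}"
```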

[console-integrations]: https://console.cloud.timescale.com/dashboard/integrations
33 changes: 13 additions & 20 deletions _partials/_grafana-connect.md
@@ -1,11 +1,14 @@
## Prerequisites

* [Create a target $SERVICE_LONG][create-service]
* Install [self-managed Grafana][grafana-self-managed], or sign up for [Grafana Cloud][grafana-cloud]
import IntegrationPrereqs from "versionContent/_partials/_integration-prereqs.mdx";

## Add your $SERVICE_LONG as a data source
<IntegrationPrereqs />

To connect the data in your $SERVICE_LONG to Grafana:
* Install [self-managed Grafana][grafana-self-managed] or sign up for [Grafana Cloud][grafana-cloud].

## Add your $SERVICE_SHORT as a data source

To connect the data in your $SERVICE_SHORT to Grafana:

<Procedure>

@@ -14,27 +17,16 @@ To connect the data in your $SERVICE_LONG to Grafana:
In your browser, log in to either:
- Self-hosted Grafana: at `http://localhost:3000/`. The default credentials are `admin`, `admin`.
- Grafana Cloud: use the URL and credentials you set when you created your account.
1. **Add your $SERVICE_LONG as a data source**
1. **Add your $SERVICE_SHORT as a data source**
1. Open `Connections` > `Data sources`, then click `Add new data source`.
1. Select `PostgreSQL` from the list.
1. Configure the following fields:
- `Host URL`: the host and port for your $SERVICE_SHORT, in this format: `<HOST>:<PORT>`.
- `Database name`: the name to use for the dataset.
- `Username`: `tsdbadmin`, or another privileged user.
- `Password`: the password for `User`.
- `Database`: `tsdb`.
1. Configure the connection:
- `Host URL`, `Username`, `Password`, and `Database`: configure using your [connection details][connection-info].
- `Database name`: provide the name for your dataset.
- `TLS/SSL Mode`: select `require`.
- `PostgreSQL options`: enable `TimescaleDB`.
- Leave the default setting for all other fields.

Get the values for `Host URL` and `Password` from the connection string generated when you created your $SERVICE_LONG. For example, in the following connection string:

```bash
postgres://tsdbadmin:krifchuf3r8c5onn@s5pq0es2cy.vfbtkqzhtm.tsdb.cloud.timescale.com:39941/tsdb?sslmode=require
```

`krifchuf3r8c5onn` is the password and `s5pq0es2cy.vfbtkqzhtm.tsdb.cloud.timescale.com:39941` is the host URL in the required format.

1. **Click `Save & test`**

Grafana checks that your details are set correctly.
@@ -44,4 +36,5 @@ To connect the data in your $SERVICE_LONG to Grafana:
[grafana-self-managed]: https://grafana.com/get/?tab=self-managed
[grafana-cloud]: https://grafana.com/get/
[cloud-login]: https://console.cloud.timescale.com/
[create-service]: /getting-started/:currentVersion:/services/
[create-service]: /getting-started/:currentVersion:/services/
[connection-info]: /use-timescale/:currentVersion:/integrations/find-connection-details/
5 changes: 5 additions & 0 deletions _partials/_integration-prereqs-cloud-only.md
@@ -0,0 +1,5 @@
Before integrating:

* Create a [target $SERVICE_LONG][create-service].

[create-service]: /getting-started/:currentVersion:/services/
9 changes: 9 additions & 0 deletions _partials/_integration-prereqs.md
@@ -0,0 +1,9 @@
Before integrating:

* Create a [target $SERVICE_LONG][create-service]. You need [your connection details][connection-info] to follow this procedure; a quick way to check them is sketched below.

This procedure also works for [self-hosted $TIMESCALE_DB][enable-timescaledb].

[create-service]: /getting-started/:currentVersion:/services/
[enable-timescaledb]: /self-hosted/:currentVersion:/install/
[connection-info]: /use-timescale/:currentVersion:/integrations/find-connection-details/
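A quick way to check the connection details is to run a trivial query with `psql`. This sketch assumes the default `tsdbadmin` user and `tsdb` database; replace the placeholders with your connection details:

```bash
# Confirm the connection details work and that TimescaleDB is available.
psql "postgres://tsdbadmin:<PASSWORD>@<HOST>:<PORT>/tsdb?sslmode=require" \
  -c "SELECT extname, extversion FROM pg_extension WHERE extname = 'timescaledb';"
```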