diff --git a/_docs/_user_guide/data_and_analytics/cloud_ingestion/connected_sources.md b/_docs/_user_guide/data_and_analytics/cloud_ingestion/connected_sources.md
index 22e8f37ab6d..1d44b3cbffe 100644
--- a/_docs/_user_guide/data_and_analytics/cloud_ingestion/connected_sources.md
+++ b/_docs/_user_guide/data_and_analytics/cloud_ingestion/connected_sources.md
@@ -9,21 +9,21 @@ page_type: reference

# Connected Sources

-> Connected sources are a zero-copy alternative to directly syncing data with Braze’s Cloud Data Ingestion (CDI) feature. With a connected source, directly query your data warehouse to create new segments–without copying any of the underlying data to Braze.
+> Connected sources are a zero-copy alternative to directly syncing data with Braze’s Cloud Data Ingestion (CDI) feature. A connected source directly queries your data warehouse to create new segments without copying any of the underlying data to Braze.

-Once a connected source is added to your Braze workspace, you can create a CDI segment within Segment Extensions. CDI Segments lets you write SQL that directly queries on your own data warehouse (using data there that’s made available via your CDI Connected Source), and creates and maintains a group of users that can be targeted within Braze.
+After adding a connected source to your Braze workspace, you can create a CDI segment within Segment Extensions. CDI segments let you write SQL that directly queries your data warehouse (using data made available through your CDI connected source) to create and maintain a group of users that can be targeted within Braze.

-For more information on creating a segment with this source, see the [CDI Segments documentation]({{site.baseurl}}/user_guide/engagement_tools/segments/segment_extension/cdi_segments/).
+For more information on creating a segment with this source, view [CDI segments]({{site.baseurl}}/user_guide/engagement_tools/segments/segment_extension/cdi_segments/).
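To make the model concrete, a CDI segment query is SQL that runs in your warehouse and returns the users the segment should contain. A minimal sketch follows; the table and column names are placeholders, and the exact identifier columns Braze expects are described in the CDI segments documentation linked above:

```sql
-- Hypothetical segment query: return the identifiers of users to include.
-- my_schema.user_purchases and lifetime_value are illustrative names only.
SELECT external_id
FROM my_schema.user_purchases
WHERE lifetime_value > 500;
```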
{% alert warning %}
-Because connected sources run on your data warehouse directly, you will incur all costs associated with running these queries in your data warehouse. Connected sources do not consume data points, and CDI segments do not consume SQL Segment credits.
+Because connected sources run on your data warehouse directly, you will incur all costs associated with running these queries in your data warehouse. Connected sources don't consume data points, and CDI segments don't consume SQL segment credits.
{% endalert %}

## Integrating connected sources

### Step 1: Connect your resources

-Cloud Data Ingestion connected sources require some setup on the Braze side and in your instance. Follow these steps to set up the integration−some steps will be done in your data warehouse and some steps will be done in your Braze dashboard.
+Cloud Data Ingestion connected sources require some setup in Braze and in your instance. Follow these steps to set up the integration; some steps are done in your data warehouse and some in your Braze dashboard.

{% tabs %}
{% tab Snowflake %}
@@ -77,7 +77,7 @@ There may be two to five minutes of warm-up time when Braze connects to Classic

### Step 2: Set up your data warehouse

-First, set up the source data and required resources in your data warehouse environment. The connected source may reference one or more tables, so ensure your Braze user has permissions for all tables you want available in the connected source.
+Set up the source data and required resources in your data warehouse environment. The connected source may reference one or more tables, so ensure your Braze user has permission to access all tables you want in the connected source.

{% tabs %}
{% tab Snowflake %}
@@ -114,7 +114,7 @@ GRANT USAGE ON WAREHOUSE BRAZE_INGESTION_WAREHOUSE TO ROLE BRAZE_INGESTION_ROLE;
```

{% alert note %}
-The warehouse will need to have the **auto-resume** flag on. 
If not, you will need to grant Braze additional `OPERATE` privileges on the warehouse for Braze to turn it on when it's time to run the query.
+The warehouse needs to have the **auto-resume** flag turned on. If it isn't, you'll need to grant Braze additional `OPERATE` privileges on the warehouse so Braze can turn it on when it's time to run the query.
{% endalert %}

#### Step 2.3: Set up the user

@@ -130,9 +130,9 @@ You will share connection information with Braze and receive a public key to app
When connecting different workspaces to the same Snowflake account, you must create a unique user for each Braze workspace where you are creating an integration. Within a workspace, you can reuse the same user across integrations, but integration creation will fail if a user on the same Snowflake account is duplicated across workspaces.
{% endalert %}

-#### Step 2.4: Allow Braze IPs in Snowflake network policy (optional)
+#### Step 2.4: Allow Braze IPs in your Snowflake network policy (optional)

-Depending on the configuration of your Snowflake account, you may need to allow the following IP addresses in your Snowflake network policy. For more information on enabling this, see the relevant Snowflake documentation on [modifying a network policy](https://docs.snowflake.com/en/user-guide/network-policies.html#modifying-network-policies).
+Depending on the configuration of your Snowflake account, you may need to allow the following IP addresses in your Snowflake network policy. For more information on doing this, view the relevant Snowflake documentation on [modifying a network policy](https://docs.snowflake.com/en/user-guide/network-policies.html#modifying-network-policies).

{% subtabs %}
{% subtab United States (US) %}
@@ -176,7 +176,7 @@ Create a user for your connected source to use. This user will be used to genera
You may choose to grant access to all tables in a schema, or grant privileges only to specific tables. 
Whichever tables the Braze role has access to will be available to query in the CDI segment. Be sure to grant the user access to any new tables when they're created, or set default permissions for the user.

- The `create table` permission is required so Braze can create a table with your CDI Segment query results before updating the segment in Braze. Braze will create a temporary table per segment, and the table will only persist while Braze is updating the segment.
+The `create table` permission is required so Braze can create a table with your CDI segment query results before updating the segment in Braze. Braze will create a temporary table per segment, which will only persist while Braze updates the segment.

#### Step 2.2: Allow access to Braze IPs

@@ -185,7 +185,7 @@ If you have a firewall or other network policies, you must give Braze network ac
You may also need to change your security groups to allow Braze access to your data in Redshift. Make sure to explicitly allow inbound traffic on the IPs below and on the port used to query your Redshift cluster (default is 5439). You should explicitly allow Redshift TCP connectivity on this port even if the inbound rules are set to "allow all". In addition, it is important that the endpoint for the Redshift cluster be publicly accessible in order for Braze to connect to your cluster.

-If you do not want your Redshift cluster to be publicly accessible, you can set up a VPC and EC2 instance to use an ssh tunnel to access the Redshift data. For more information, see [AWS: How do I access a private Aamazon Redshift cluster from my local machine?](https://repost.aws/knowledge-center/private-redshift-cluster-local-machine)
+If you don't want your Redshift cluster to be publicly accessible, you can set up a VPC and EC2 instance to use an SSH tunnel to access the Redshift data. 
For more information, see [AWS: How do I access a private Amazon Redshift cluster from my local machine?](https://repost.aws/knowledge-center/private-redshift-cluster-local-machine)

{% subtabs %}
{% subtab United States (US) %}
@@ -220,11 +220,11 @@ For instances `EU-01` and `EU-02`, these are the relevant IP addresses:

Create a service account in GCP for Braze to use to connect and read data from your table(s). The service account should have the following permissions:

-- **BigQuery Connection User:** This will allow Braze to make connections.
-- **BigQuery User:** This will provide Braze access to run queries, read dataset metadata, and list tables.
-- **BigQuery Data Viewer:** This will provide Braze access to view datasets and their contents.
-- **BigQuery Job User:** This will provide Braze access to run jobs.
-- **bigquery.tables.create** This will provide Braze access to create temporary tables during segment refresh.
+- **BigQuery Connection User:** Allows Braze to make connections.
+- **BigQuery User:** Provides Braze access to run queries, read dataset metadata, and list tables.
+- **BigQuery Data Viewer:** Provides Braze access to view datasets and their contents.
+- **BigQuery Job User:** Provides Braze access to run jobs.
+- **bigquery.tables.create**: Provides Braze access to create temporary tables during segment refresh.

Create a service account for your connected source to use. This user will be used to generate the list of tables available in your CDI segments, and to query source tables to create new segments. After the connected source is created, Braze will discover the names and descriptions of all tables available to the user in the source schema.

@@ -232,7 +232,7 @@ You may choose to grant access to all tables in a dataset, or grant privileges o
The `create table` permission is required so Braze can create a table with your CDI Segment query results before updating the segment in Braze. 
Braze will create a temporary table per segment, and the table will only persist while Braze is updating the segment.

-After creating the service account and granting permissions, generate a JSON key. For more information, see [Google Cloud: Create and delete service account keys](https://cloud.google.com/iam/docs/keys-create-delete). You'll upload this to the Braze dashboard later.
+After creating the service account and granting permissions, generate a JSON key. For more information, view [Google Cloud: Create and delete service account keys](https://cloud.google.com/iam/docs/keys-create-delete). You'll upload this to the Braze dashboard later.

#### Step 2.2: Allow access to Braze IPs

@@ -267,22 +267,22 @@ For instances `EU-01` and `EU-02`, these are the relevant IP addresses:

{% endtab %}
{% tab Databricks %}
-#### Step 2.1: Create a Access Token
+#### Step 2.1: Create an access token

-In order for Braze to access Databricks, a personal access token needs to be created.
+For Braze to access Databricks, you need to create a personal access token.

-1. In your Databricks workspace, click your Databricks username in the top bar, and then select **User Settings** from the drop-down.
+1. In your Databricks workspace, select your Databricks username in the top bar, then select **User Settings** from the dropdown.
2. Make sure the service account has `CREATE TABLE` privileges on the schema used for the connected source.
-3. On the Access tokens tab, click **Generate new token**.
+3. On the **Access tokens** tab, select **Generate new token**.
4. Enter a comment that helps you identify this token, such as "Braze CDI", and give the token no lifetime by leaving the **Lifetime (days)** box empty.
-5. Click **Generate**.
-6. Copy the displayed token, and then click **Done**.
+5. Select **Generate**.
+6. Copy the displayed token, and then select **Done**. 
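The `CREATE TABLE` grant mentioned in step 2 can be applied with SQL in a Unity Catalog workspace. A sketch follows; the catalog, schema, and principal names are placeholders for your own, and your workspace's securable layout may differ:

```sql
-- Placeholder names: replace my_catalog, my_schema, and the principal.
-- The principal needs to reach the schema, read the source tables, and
-- create the temporary tables Braze uses during segment refresh.
GRANT USE CATALOG ON CATALOG my_catalog TO `braze-cdi@example.com`;
GRANT USE SCHEMA ON SCHEMA my_catalog.my_schema TO `braze-cdi@example.com`;
GRANT SELECT ON SCHEMA my_catalog.my_schema TO `braze-cdi@example.com`;
GRANT CREATE TABLE ON SCHEMA my_catalog.my_schema TO `braze-cdi@example.com`;
```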
This token will be used to generate the list of tables available in your CDI segments, and to query source tables to create new segments. After the connected source is created, Braze will discover the names and descriptions of all tables available to the user in the source schema.

You may choose to grant access to all tables in a schema, or grant privileges only to specific tables. Whichever tables the Braze role has access to will be available to query in the CDI segment.

- The `create table` permission is required so Braze can create a table with your CDI Segment query results before updating the segment in Braze. Braze will create a temporary table per segment, and the table will only persist while Braze is updating the segment.
+The `create table` permission is required so Braze can create a table with your CDI segment query results before updating the segment in Braze. Braze will create a temporary table per segment, which will only persist while Braze updates the segment.

Keep the token in a safe place until you need to enter it on the Braze dashboard during the credential creation step.

@@ -321,11 +321,13 @@ For instances `EU-01` and `EU-02`, these are the relevant IP addresses:

### Step 3: Create a connected source in the Braze dashboard

-{% tabs local %}
+{% tabs %}
{% tab Snowflake %}
#### Step 3.1: Add Snowflake connection information and source table

-Next, create a connected source in the Braze dashboard. Go to **Data Settings** > **Cloud Data Ingestion**. Navigate to the **Connected Sources** tab and click **Create data connection**.
+Create a connected source in the Braze dashboard. Go to **Data Settings** > **Cloud Data Ingestion** > **Connected Sources**, and then select **Create new data sync** > **Snowflake Import**.
+
+![]({% image_buster /assets/img/cloud_ingestion/connected_source_tab.png %}){: style="max-width:80%;"}

Input the information for your Snowflake data warehouse and source schema, then proceed to the next step. 
@@ -333,7 +335,7 @@ Input the information for your Snowflake data warehouse and source schema, then

#### Step 3.2: Configure sync details

-Next, choose a name for connected source. This name will be used in the list of available sources when you create a new CDI Segment.
+Choose a name for the connected source. This name will be used in the list of available sources when you create a new CDI segment.

Configure a maximum runtime for this source. Braze will automatically abort any queries that exceed the maximum runtime when it's creating or refreshing a segment. The maximum runtime allowed is 60 minutes; a lower runtime will reduce costs incurred on your Snowflake account.

@@ -345,115 +347,136 @@ If queries are consistently timing out and you have set a maximum runtime of 60

#### Step 3.3: Note the public key

-In the **Test connection** page, you will see a RSA public key. Note this down. You will need it complete the integration in Snowflake.
+In the **Test connection** step, take note of the RSA public key. You'll need it to complete the integration in Snowflake.

![]({% image_buster /assets/img/cloud_ingestion/connected_source_sf_3.png %})

{% endtab %}
-{% endtabs %}
+{% tab Redshift %}
+#### Step 3.1: Add Redshift connection information and source table

-### Step 4: Finalize the data warehouse configuration
+Create a connected source in the Braze dashboard. Go to **Data Settings** > **Cloud Data Ingestion** > **Connected Sources**, and then select **Create new data sync** > **Amazon Redshift Import**.

-{% tabs %}
-{% tab Snowflake %}
-Add the public key you noted during the last step to your user in Snowflake. This will allow Braze to connect to Snowflake. For details on how to do this, see the [Snowflake documentation](https://docs.snowflake.com/en/user-guide/key-pair-auth.html). 
+![]({% image_buster /assets/img/cloud_ingestion/connected_source_tab.png %}){: style="max-width:80%;"}

-If you want to rotate the keys at any point, Braze can generate a new key pair and provide you with the new public key.
+Input the information for your Redshift data warehouse and source schema, then proceed to the next step.

-```json
-ALTER USER BRAZE_INGESTION_USER SET rsa_public_key='{INSERT_YOUR_KEY}';
-```
+![]({% image_buster /assets/img/cloud_ingestion/connected_source_rd_1.png %})

-Once you have added the key to the user in Snowflake, select **Test Connection** in Braze, and then select **Done**. Your connected source is now created and ready to use in CDI Segments.
-{% endtab %}
+#### Step 3.2: Configure sync details

-{% tab Redshift %}
-#### Step 4.1: Add Redshift connection information and source table
+Choose a name for the connected source. This name will be used in the list of available sources when you create a new CDI segment.

-Go to **Data Settings** > **Cloud Data Ingestion**. Navigate to the **Connected Sources** tab and click **Create data connection**.
+Configure a maximum runtime for this source. Braze will automatically abort any queries that exceed the maximum runtime when it's creating or refreshing a segment. The maximum runtime allowed is 60 minutes; a lower runtime will reduce costs incurred on your Redshift account.

{% alert note %}
-If you are using the [older navigation]({{site.baseurl}}/navigation), go to **Cloud Data Ingestion** under **Data**.
+If queries are consistently timing out and you have set a maximum runtime of 60 minutes, consider trying to optimize your query execution time or dedicating more compute resources to the Braze user.
{% endalert %}

-Input the information for your Redshift account and source schema, then proceed to the next step. 
+![]({% image_buster /assets/img/cloud_ingestion/connected_source_rd_2.png %}) -![]({% image_buster /assets/img/cloud_ingestion/connected_source_rs_1.png %}) +#### Step 3.3: Note the public key (optional) -#### Step 4.2: Configure sync details +If your credentials have **Connect with SSH Tunnel** selected, take note of the RSA public key in the **Test connection** step. You'll need it to complete the integration in Redshift. -Next, choose a name for connected source. This name will be used in the list of available sources when you create a new CDI Segment. +![]({% image_buster /assets/img/cloud_ingestion/connected_source_rd_3.png %}) -Configure a maximum runtime for this source. Braze will automatically abort any queries that exceed the maximum runtime when it's creating or refreshing a segment. The maximum runtime allowed is 60 minutes; a lower runtime will reduce costs incurred on your Snowflake account. +{% endtab %} +{% tab BigQuery %} +#### Step 3.1: Add BigQuery connection information and source table + +Create a connected source in the Braze dashboard. Go to **Data Settings** > **Cloud Data Ingestion** > **Connected Sources**, and then select **Create new data sync** > **Google BigQuery Import**. + +![]({% image_buster /assets/img/cloud_ingestion/connected_source_tab.png %}){: style="max-width:80%;"} + +Input the information for your BigQuery project and dataset, then proceed to the next step. + +![]({% image_buster /assets/img/cloud_ingestion/connected_source_bg_1.png %}) + +#### Step 3.2: Configure sync details + +Choose a name for the connected source. This name will be used in the list of available sources when you create a new CDI segment. + +Configure a maximum runtime for this source. Braze will automatically abort any queries that exceed the maximum runtime when it's creating or refreshing a segment. The maximum runtime allowed is 60 minutes; a lower runtime will reduce costs incurred on your BigQuery account. 
{% alert note %}
-If queries are consistently timing out and you have set a maximum runtime of 60 minutes, consider trying to optimize your query execution time.
+If queries are consistently timing out and you have set a maximum runtime of 60 minutes, consider trying to optimize your query execution time or dedicating more compute resources to the Braze user.
{% endalert %}

-![]({% image_buster /assets/img/cloud_ingestion/connected_source_sf_2.png %})
+![]({% image_buster /assets/img/cloud_ingestion/connected_source_bg_2.png %})

-#### Step 4.3: Test Connection
+#### Step 3.3: Test the connection

-Select **Test Connection** in Braze to verify that the list of tables visible to the user is what you expect, select **Done**. Your connected source is now created and ready to use in CDI Segments.
+Select **Test Connection** to verify that the list of tables visible to the user is what you expect, then select **Done**. Your connected source is now created and ready to use in CDI segments.
+
+![]({% image_buster /assets/img/cloud_ingestion/connected_source_test_connection.png %})

-![]({% image_buster /assets/img/cloud_ingestion/connected_source_3.png %})
{% endtab %}
+{% tab Databricks %}
+#### Step 3.1: Add Databricks connection information and source table

-{% tab BigQuery %}
-#### Step 4.1: Add BigQuery connection information and source dataset
+Create a connected source in the Braze dashboard. Go to **Data Settings** > **Cloud Data Ingestion** > **Connected Sources**, and then select **Create new data sync** > **Databricks Import**.
+
+![]({% image_buster /assets/img/cloud_ingestion/connected_source_tab.png %}){: style="max-width:80%;"}

-Upload the JSON key and provide a name for the service account, then input the details of your source dataset.
+Input the information for your Databricks credentials and, optionally, the catalog and source schema, then proceed to the next step. 
-![]({% image_buster /assets/img/cloud_ingestion/connected_source_bq_1.png %}) +![]({% image_buster /assets/img/cloud_ingestion/connected_source_databricks_1.png %}) -#### Step 4.2: Configure sync details +#### Step 3.2: Configure sync details -Next, choose a name for connected source. This name will be used in the list of available sources when you create a new CDI Segment. +Choose a name for the connected source. This name will be used in the list of available sources when you create a new CDI segment. -Configure a maximum runtime for this source. Braze will automatically abort any queries that exceed the maximum runtime when it's creating or refreshing a segment. The maximum runtime allowed is 60 minutes; a lower runtime will reduce costs incurred on your Snowflake account. +Configure a maximum runtime for this source. Braze will automatically abort any queries that exceed the maximum runtime when it's creating or refreshing a segment. The maximum runtime allowed is 60 minutes; a lower runtime will reduce costs incurred on your Databricks account. {% alert note %} -If queries are consistently timing out and you have set a maximum runtime of 60 minutes, consider trying to optimize your query execution time. +If queries are consistently timing out and you have set a maximum runtime of 60 minutes, consider trying to optimize your query execution time or dedicating a larger warehouse to the Braze user. {% endalert %} -![]({% image_buster /assets/img/cloud_ingestion/connected_source_sf_2.png %}) +![]({% image_buster /assets/img/cloud_ingestion/connected_source_db_2.png %}) -#### Step 4.3: Test Connection +#### Step 3.3: Test the connection -Select **Test Connection** in Braze to verify that the list of tables visible to the user is what you expect, select **Done**. Your connected source is now created and ready to use in CDI Segments. +Select **Test Connection** to verify that the list of tables visible to the user is what you expect, then select **Done**. 
Your connected source is now created and ready to use in CDI segments.

-![]({% image_buster /assets/img/cloud_ingestion/connected_source_3.png %})
-{% endtab %}
+![]({% image_buster /assets/img/cloud_ingestion/connected_source_test_connection.png %})

-{% tab Databricks %}
-#### Step 4.1: Add Databricks connection information and source dataset
+{% endtab %}
+{% endtabs %}

-Input the information for your Databricks data warehouse and source data, then proceed to the next step.

### Step 4: Finalize the data warehouse configuration

-![]({% image_buster /assets/img/cloud_ingestion/connected_source_db_1.png %})
+{% tabs %}
+{% tab Snowflake %}
+Add the public key you noted during the last step to your user in Snowflake. This will allow Braze to connect to Snowflake. For details on how to do this, see the [Snowflake documentation](https://docs.snowflake.com/en/user-guide/key-pair-auth.html).

-#### Step 4.2: Configure sync details
+If you want to rotate the keys at any point, Braze can generate a new key pair and provide you with the new public key.

-Next, choose a name for connected source. This name will be used in the list of available sources when you create a new CDI Segment.
+```sql
+ALTER USER BRAZE_INGESTION_USER SET rsa_public_key='{INSERT_YOUR_KEY}';
+```

-Configure a maximum runtime for this source. Braze will automatically abort any queries that exceed the maximum runtime when it's creating or refreshing a segment. The maximum runtime allowed is 60 minutes; a lower runtime will reduce costs incurred on your Snowflake account.
+After you add the key to the user in Snowflake, select **Test Connection** in Braze, and then select **Done**. Your connected source is now created and ready to use in CDI segments.
+{% endtab %}

-{% alert note %}
-If queries are consistently timing out and you have set a maximum runtime of 60 minutes, consider trying to optimize your query execution time. 
-{% endalert %} +{% tab Redshift %} +If connecting with an SSH tunnel, add the public key you noted during the last step to the SSH tunnel user. -![]({% image_buster /assets/img/cloud_ingestion/connected_source_sf_2.png %}) +After you add the key to the user, select **Test Connection** in Braze, and then select **Done**. Your connected source is now created and ready to use in CDI segments. -#### Step 4.3: Test Connection +{% endtab %} +{% tab BigQuery %} +This doesn't apply to BigQuery. -Select **Test Connection** in Braze to verify that the list of tables visible to the user is what you expect, select **Done**. Your connected source is now created and ready to use in CDI Segments. +{% endtab %} +{% tab Databricks %} +This doesn't apply to Databricks. -![]({% image_buster /assets/img/cloud_ingestion/connected_source_3.png %}) {% endtab %} {% endtabs %} {% alert note %} -You must successfully test a source before it can move from Draft to Active state. If you need to close out of the creation page, your integration will be saved, and you can revisit the details page to make changes and test. +You must successfully test a source before it can move from the "draft" to the "active" state. If you need to close out of the creation page, your integration will be saved, and you can revisit the details page to make changes and test. {% endalert %} ## Setting up additional integrations or users (optional) @@ -462,7 +485,7 @@ You must successfully test a source before it can move from Draft to Active stat {% tab Snowflake %} You may set up multiple integrations with Braze, but each integration should be configured to connect a different schema. When creating additional connections, you may reuse existing credentials if connecting to the same Snowflake account. -If you reuse the same user and role across integrations, you will not need to add the public key again. +If you reuse the same user and role across integrations, you won't need to add the public key again. 
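If you do rotate keys for a reused user later, Snowflake's second key slot allows a rotation without downtime. A sketch, assuming the user name from the earlier example; the key values are placeholders provided by Braze:

```sql
-- Stage the new public key in the second slot while the old key keeps working.
ALTER USER BRAZE_INGESTION_USER SET rsa_public_key_2='{NEW_PUBLIC_KEY}';
-- After confirming Braze connects with the new key, remove the old one.
ALTER USER BRAZE_INGESTION_USER UNSET rsa_public_key;
```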
{% endtab %}
{% tab Redshift %}
@@ -480,8 +503,8 @@ You may set up multiple sources with Braze, but each source should be configured

## Using the connected source

-Once the source is created, it can be used to create one or more CDI Segments. For more information on creating a segment with this source, see the [CDI Segments documentation]({{site.baseurl}}/user_guide/engagement_tools/segments/segment_extension/cdi_segments/).
+After the source is created, it can be used to create one or more CDI segments. For more information on creating a segment with this source, see [CDI segments]({{site.baseurl}}/user_guide/engagement_tools/segments/segment_extension/cdi_segments/).

{% alert note %}
-If queries are consistently timing out and you have set a maximum runtime of 60 minutes, consider trying to optimize your query execution time or dedicating more compute resources (e.g., a larger warehouse) to the Braze user.
+If queries are consistently timing out and you have set a maximum runtime of 60 minutes, consider trying to optimize your query execution time or dedicating more compute resources (such as a larger warehouse) to the Braze user. 
{% endalert %} diff --git a/assets/img/cloud_ingestion/connected_source_bg_1.png b/assets/img/cloud_ingestion/connected_source_bg_1.png new file mode 100644 index 00000000000..a0fc11039f4 Binary files /dev/null and b/assets/img/cloud_ingestion/connected_source_bg_1.png differ diff --git a/assets/img/cloud_ingestion/connected_source_bg_2.png b/assets/img/cloud_ingestion/connected_source_bg_2.png new file mode 100644 index 00000000000..11c9eb5b6e3 Binary files /dev/null and b/assets/img/cloud_ingestion/connected_source_bg_2.png differ diff --git a/assets/img/cloud_ingestion/connected_source_databricks_1.png b/assets/img/cloud_ingestion/connected_source_databricks_1.png new file mode 100644 index 00000000000..4f7fba3279c Binary files /dev/null and b/assets/img/cloud_ingestion/connected_source_databricks_1.png differ diff --git a/assets/img/cloud_ingestion/connected_source_db_2.png b/assets/img/cloud_ingestion/connected_source_db_2.png new file mode 100644 index 00000000000..5cc85d35480 Binary files /dev/null and b/assets/img/cloud_ingestion/connected_source_db_2.png differ diff --git a/assets/img/cloud_ingestion/connected_source_rd_1.png b/assets/img/cloud_ingestion/connected_source_rd_1.png new file mode 100644 index 00000000000..de1067f7739 Binary files /dev/null and b/assets/img/cloud_ingestion/connected_source_rd_1.png differ diff --git a/assets/img/cloud_ingestion/connected_source_rd_2.png b/assets/img/cloud_ingestion/connected_source_rd_2.png new file mode 100644 index 00000000000..363104bd3fa Binary files /dev/null and b/assets/img/cloud_ingestion/connected_source_rd_2.png differ diff --git a/assets/img/cloud_ingestion/connected_source_rd_3.png b/assets/img/cloud_ingestion/connected_source_rd_3.png new file mode 100644 index 00000000000..5a681bbfa2b Binary files /dev/null and b/assets/img/cloud_ingestion/connected_source_rd_3.png differ diff --git a/assets/img/cloud_ingestion/connected_source_sf_1.png b/assets/img/cloud_ingestion/connected_source_sf_1.png index 
35b8dca2dd1..998412ed5ba 100644 Binary files a/assets/img/cloud_ingestion/connected_source_sf_1.png and b/assets/img/cloud_ingestion/connected_source_sf_1.png differ diff --git a/assets/img/cloud_ingestion/connected_source_tab.png b/assets/img/cloud_ingestion/connected_source_tab.png new file mode 100644 index 00000000000..5760e8aee18 Binary files /dev/null and b/assets/img/cloud_ingestion/connected_source_tab.png differ diff --git a/assets/img/cloud_ingestion/connected_source_test_connection.png b/assets/img/cloud_ingestion/connected_source_test_connection.png new file mode 100644 index 00000000000..e18e7df6b36 Binary files /dev/null and b/assets/img/cloud_ingestion/connected_source_test_connection.png differ