This repository has been archived by the owner on Aug 10, 2023. It is now read-only.

Commit

Merge branch 'GoogleCloudPlatform:master' into master
ToddKopriva committed Jul 27, 2023
2 parents 1c15129 + 78632b2 commit 3cd3593
Showing 30 changed files with 200 additions and 30 deletions.
@@ -1,6 +1,6 @@
pylint==2.4.0
google-cloud==0.34.0
Flask==1.1.1
Flask==2.3.2
kafka-python==1.4.6
pykafka==2.8.0
confluent-kafka==1.1.0
4 changes: 2 additions & 2 deletions archived/memorystore-oc/java/pom.xml
@@ -30,12 +30,12 @@
<dependency>
<groupId>com.google.guava</groupId>
<artifactId>guava</artifactId>
<version>25.1-jre</version>
<version>32.0.0-jre</version>
</dependency>
<dependency>
<groupId>io.grpc</groupId>
<artifactId>grpc-protobuf</artifactId>
<version>1.15.1</version>
<version>1.53.0</version>
</dependency>
<dependency>
<groupId>io.opencensus</groupId>
7 binary files not shown.
2 changes: 1 addition & 1 deletion tutorials/cloud-functions-avro-import-bq/index.js
@@ -16,7 +16,7 @@ exports.ToBigQuery_Stage = (event, callback) => {
// Do not use the ftp_files Bucket to ensure that the bucket does not get crowded.
// Change bucket to gas_ddr_files_staging
// Set the table name (TableId) to the full file name including date,
// this will give each table a new distinct name and we can keep a record of all of the files recieved.
// this will give each table a new distinct name and we can keep a record of all of the files received.
// This may not be the best way to do this... at some point we will need to archive and delete prior records.
const dashOffset = filename.indexOf('-');
const tableId = filename.substring(0, dashOffset) + '_STAGE';
171 changes: 171 additions & 0 deletions tutorials/coral-talk-on-cloud-run/index.md
@@ -0,0 +1,171 @@
---
title: Coral Talk on Google Cloud Run
description: How to deploy Coral Talk on Google Cloud Platform using managed services - Cloud Run and Memorystore.
author: vyolla
tags: cloud-run, memorystore
date_published: 2022-04-04
---

Bruno Patrocinio | Customer Engineer | Google


<p style="background-color:#CAFACA;"><i>Contributed by Google employees.</i></p>

This tutorial describes how to deploy [Coral Talk](https://docs.coralproject.net/), an open-source commenting platform, on Google Cloud Platform using managed services.

The diagram below shows the general flow:
![architecture](images/architecture.png)

The instructions are provided for a Linux development environment, such as [Cloud Shell](https://cloud.google.com/shell/).
However, you can also run the application on Compute Engine, Kubernetes, another serverless environment, or outside of
Google Cloud.

This tutorial assumes that you know the basics of the following products and services:

- [Cloud Run](https://cloud.google.com/run/docs)
- [Container Registry](https://cloud.google.com/container-registry/docs)
- [Memorystore](https://cloud.google.com/memorystore/docs)
- [Compute Engine](https://cloud.google.com/compute/docs)
- [`gcloud`](https://cloud.google.com/sdk/docs)
- [Docker](https://docs.docker.com/engine/reference/commandline/run)

## Objectives

* Learn how to create and deploy services using `gcloud` commands.
* Deploy an application with Cloud Run and Memorystore.

## Costs

This tutorial uses billable components of Google Cloud, including the following:

* [Cloud Run](https://cloud.google.com/run)
* [Compute Engine](https://cloud.google.com/compute)
* [Memorystore](https://cloud.google.com/memorystore)

Use the [Pricing Calculator](https://cloud.google.com/products/calculator) to generate a cost estimate based on your
projected usage.

This tutorial generates only a small number of Cloud Run requests, which may fall within the free allotment.



## Before you begin


For this tutorial, you need a Google Cloud [project](https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy#projects).
You can create a new one, or you can select a project that you have already created:

1. Select or create a Google Cloud project.

[GO TO THE MANAGE RESOURCES PAGE](https://console.cloud.google.com/cloud-resource-manager)

2. Enable billing for your project.

[ENABLE BILLING](https://support.google.com/cloud/answer/6293499#enable-billing)

3. Enable the Cloud Run and Artifact Registry APIs. For details, see [ENABLING APIs](https://cloud.google.com/apis/docs/getting-started#enabling_apis).

4. Add the *Artifact Registry Service Agent* role to the service account `[project-id]-compute@developer.gserviceaccount.com`.
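
   For reference, the following is one way to do steps 3 and 4 from the command line. This is a sketch: it assumes that the `gcloud` CLI is already authenticated against your project, and the role ID and service-account address mirror the names given above — substitute the values shown in your project's IAM page.

```
# Enable the APIs used in this tutorial (step 3).
gcloud services enable run.googleapis.com artifactregistry.googleapis.com

# Grant the Artifact Registry Service Agent role to the Compute Engine default
# service account (step 4). The member address is a placeholder.
gcloud projects add-iam-policy-binding {my-project} \
--member="serviceAccount:[project-id]-compute@developer.gserviceaccount.com" \
--role="roles/artifactregistry.serviceAgent"
```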


## Detailed steps


### Download images and upload them to Artifact Registry

#### 1. Create the Artifact Registry repository
```
gcloud artifacts repositories create coral-talk --repository-format=docker --location=us-central1
gcloud auth configure-docker us-central1-docker.pkg.dev
```

#### 2. Mongo
```
docker pull mongo:4.2
docker tag mongo:4.2 us-central1-docker.pkg.dev/{my-project}/coral-talk/mongo
docker push us-central1-docker.pkg.dev/{my-project}/coral-talk/mongo
```

#### 3. Coral Talk
```
docker pull coralproject/talk:6
docker tag coralproject/talk:6 us-central1-docker.pkg.dev/{my-project}/coral-talk/talk
docker push us-central1-docker.pkg.dev/{my-project}/coral-talk/talk
```
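
Optionally, confirm that both images are now in the repository. This is a sketch, using the repository path from the commands above:

```
gcloud artifacts docker images list us-central1-docker.pkg.dev/{my-project}/coral-talk
```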

### Create VPC Network
```
gcloud compute networks create coral --project={my-project} \
--subnet-mode=custom --mtu=1460 --bgp-routing-mode=regional
```

```
gcloud compute networks subnets create talk --project={my-project} \
--range=10.0.0.0/9 --network=coral --region=us-central1 \
--secondary-range=serverless=10.130.0.0/28
```

### Create Serverless VPC access
```
gcloud compute networks vpc-access connectors create talk \
--region=us-central1 \
--network=coral \
--range=10.130.0.0/28 \
--min-instances=2 \
--max-instances=3 \
--machine-type=f1-micro
```

### Create Memorystore Redis instance
```
gcloud redis instances create myinstance --size=2 --region=us-central1 \
--redis-version=redis_3_2
```
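
You'll need the instance's host IP address and port later, when you set the `REDIS_URI` environment variable. One way to retrieve them (a sketch, assuming the instance name used above):

```
gcloud redis instances describe myinstance --region=us-central1 \
--format='value(host,port)'
```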

### Create Mongo VM
```
gcloud compute instances create-with-container instance-1 \
--project={my-project} --zone=us-central1-a --machine-type=f1-micro \
--network-interface=subnet=talk,no-address \
--service-account={my-project}-compute@developer.gserviceaccount.com \
--boot-disk-size=10GB --container-image=us-central1-docker.pkg.dev/{my-project}/coral-talk/mongo \
--container-restart-policy=always
```
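
You'll also need the VM's internal IP address for the `MONGODB_URI` environment variable. A sketch, assuming the instance name and zone used above:

```
gcloud compute instances describe instance-1 --zone=us-central1-a \
--project={my-project} --format='get(networkInterfaces[0].networkIP)'
```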

### Create Coral Talk Service in Cloud Run
```
gcloud run deploy coralproject \
--image=us-central1-docker.pkg.dev/{my-project}/coral-talk/talk \
--concurrency=80 \
--platform=managed \
--region=us-central1 \
--project={my-project}
```

- Add the environment variables `MONGODB_URI`, `REDIS_URI`, and `SIGNING_SECRET`.
- Add the Serverless VPC Access connector (`talk`) created earlier, as shown in the sketch below.
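
The following sketch shows one way to apply both settings with `gcloud`. The connection strings are placeholders — substitute the Mongo VM's internal IP address and the Memorystore host found earlier — and the database name `coral` and the generated signing secret are assumptions for illustration, not required values.

```
gcloud run services update coralproject \
--region=us-central1 \
--project={my-project} \
--vpc-connector=talk \
--set-env-vars="MONGODB_URI=mongodb://MONGO_VM_IP:27017/coral,REDIS_URI=redis://REDIS_HOST_IP:6379,SIGNING_SECRET=$(openssl rand -hex 32)"
```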

### Access the service URL to configure Coral Talk

You've deployed the Mongo and Coral Talk Docker images to Artifact Registry, configured Serverless VPC Access so that Cloud Run can reach your Virtual Private Cloud network, created a Memorystore Redis instance, set up a VM running the Mongo container, and deployed the Coral Talk service to Cloud Run. Open the Cloud Run service URL in your browser to complete the Coral Talk setup.
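
If you need to look up the service URL, one option is the following sketch, which assumes the service name used in the deploy command above:

```
gcloud run services describe coralproject --region=us-central1 \
--project={my-project} --format='value(status.url)'
```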

## Cleaning up

To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, you can delete the project.

Deleting a project has the following consequences:

- If you used an existing project, you'll also delete any other work that you've done in the project.
- You can't reuse the project ID of a deleted project. If you created a custom project ID that you plan to use in the
future, delete the resources inside the project instead. This ensures that URLs that use the project ID, such as
an `appspot.com` URL, remain available.

To delete a project, do the following:

1. In the Cloud Console, go to the [Projects page](https://console.cloud.google.com/iam-admin/projects).
2. In the project list, select the project you want to delete and click **Delete**.
3. In the dialog, type the project ID, and then click **Shut down** to delete the project.
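
If you prefer the command line, a sketch of the equivalent is:

```
gcloud projects delete {my-project}
```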

## What's next


- Learn more about [Cloud developer tools](https://cloud.google.com/products/tools).
- Try out other Google Cloud features for yourself. Have a look at our [tutorials](https://cloud.google.com/docs/tutorials).
2 changes: 1 addition & 1 deletion tutorials/dataflow-dlp-to-datacatalog-tags/pom.xml
@@ -61,7 +61,7 @@
<protobuf-util.version>3.6.1</protobuf-util.version>
<protobuf.version>3.16.3</protobuf.version>
<findbugs.version>3.0.1</findbugs.version>
<guava.version>28.0-jre</guava.version>
<guava.version>32.0.0-jre</guava.version>
<json.version>20230227</json.version>
<junit.jupiter.version>5.5.2</junit.jupiter.version>
<gson.version>2.8.9</gson.version>
@@ -304,7 +304,7 @@ private static Integer inspectSQLDb(
System.out.println();
System.out.print(String.format(">> [%s,%s:%s]: Starting Inspection", database.databaseInstanceDescription, database.databaseInstanceServer, database.databaseName));

//retreive the password from Secret Manager
//retrieve the password from Secret Manager
final String databasePassword = accessSecretVersion(ServiceOptions.getDefaultProjectId(),
database.getSecretManagerResourceName(),"latest");

@@ -323,7 +323,7 @@ private static Integer inspectSQLDb(
String dbVersion = String.format("%s[%s]", dbMetadata.getDatabaseProductName(),
dbMetadata.getDatabaseProductVersion());

// this will list out all tables in the curent schama
// this will list out all tables in the current schama
ResultSet ListTablesResults = dbMetadata
.getTables(conn.getCatalog(), null, "%", new String[]{"TABLE"});

@@ -540,10 +540,10 @@ public static String accessSecretVersion(String projectId, String secretId, Stri
}

/**
* Because this script may be connecting to mulitple JDBC drivers in the same run, this method helps ensure that the drivers are registered
* Because this script may be connecting to multiple JDBC drivers in the same run, this method helps ensure that the drivers are registered
*/
private static java.sql.Driver getJdbcDriver (String databaseType){
// Based on the SQL database type, reguster the driver. Note the pom.xml must have a
// Based on the SQL database type, register the driver. Note the pom.xml must have a
// matching driver for these to work. This addresses driver not found issues when
// trying to scan more than one JDBC type.
try {
@@ -496,7 +496,7 @@ private static List getMaxRows(List rows, int startRow, int headerCount) throws
return subRows;
}

// this methods calcualtes the total bytes of a list of rows.
// this methods calculates the total bytes of a list of rows.
public static int getBytesFromList(List list) throws IOException {
java.io.ByteArrayOutputStream baos = new java.io.ByteArrayOutputStream();
java.io.ObjectOutputStream out = new java.io.ObjectOutputStream(baos);
2 changes: 1 addition & 1 deletion tutorials/gcp-cos-basic-fim/scan.sh
@@ -79,7 +79,7 @@ mkdir -p $DATDIR $TMPDIR $LOGDIR

# Fail fast if already running
if [ -f "$LOCKFILE" ];then
echo "A scan is already in progess." | tee -a $LOGFILE
echo "A scan is already in progress." | tee -a $LOGFILE
exit
fi
touch $LOCKFILE
@@ -1,7 +1,7 @@
certifi==2022.12.7
certifi==2023.7.22
chardet==3.0.4
Click==7.0
Flask==1.0.2
Flask==2.3.2
gevent==1.4.0
greenlet==0.4.15
idna==2.8
@@ -12,7 +12,7 @@ MarkupSafe==1.1.1
msgpack==0.6.1
msgpack-python==0.5.6
pyzmq==18.0.1
requests==2.21.0
requests==2.31.0
six==1.12.0
urllib3==1.26.5
Werkzeug==2.2.3
@@ -49,7 +49,7 @@ def read_dataset(data_dir, prefix, pattern, batch_size=512, eval=False):


def get_wide_deep():
# defin model inputs
# define model inputs
inputs = {}
inputs['is_male'] = layers.Input(shape=(), name='is_male', dtype='string')
inputs['plurality'] = layers.Input(shape=(), name='plurality', dtype='string')
@@ -33,7 +33,7 @@

@app.route('/')
def index():
return 'A service to Submit a traing job for the babyweight-keras example. '
return 'A service to Submit a training job for the babyweight-keras example. '


@app.route('/api/v1/job/<string:job_id>', methods=['GET'])
@@ -33,7 +33,7 @@

@app.route('/')
def index():
return 'A service to Submit a traing job for the babyweight-keras example. '
return 'A service to Submit a training job for the babyweight-keras example. '


@app.route('/api/v1/job/<string:job_id>', methods=['GET'])
2 changes: 1 addition & 1 deletion tutorials/pci-tokenizer/examples/requirements.txt
@@ -1 +1 @@
requests==2.27.1
requests==2.31.0
2 changes: 1 addition & 1 deletion tutorials/pci-tokenizer/index.js
@@ -1,2 +1,2 @@
// Boostrap for Cloud Functions
// Bootstrap for Cloud Functions
require('./src/server.js');
1 change: 1 addition & 0 deletions tutorials/pci-tokenizer/package.json
@@ -11,6 +11,7 @@
"dependencies": {
"@google-cloud/dlp": "^1.9",
"@google-cloud/kms": "^1.6",
"body-parser": "^1.20.2",
"config": "^3.2",
"express": "^4.17",
"google-auth-library": "^5.7",
2 changes: 1 addition & 1 deletion tutorials/pci-tokenizer/src/app.js
@@ -1,5 +1,5 @@
/**
Main applicaiton script for the card data tokenizer. Called by server.js.
Main application script for the card data tokenizer. Called by server.js.
See ../index.md for usage info and Apache 2.0 license
*/
@@ -70,7 +70,7 @@ listen = 127.0.0.1:9000
; process.priority = -19

; Set the process dumpable flag (PR_SET_DUMPABLE prctl) even if the process user
; or group is differrent than the master process user. It allows to create process
; or group is different than the master process user. It allows to create process
; core dump and ptrace the process for the pool user.
; Default Value: no
; process.dumpable = yes
@@ -269,13 +269,13 @@ pm.max_spare_servers = 3
; %d: time taken to serve the request
; it can accept the following format:
; - %{seconds}d (default)
; - %{miliseconds}d
; - %{milliseconds}d
; - %{mili}d
; - %{microseconds}d
; - %{micro}d
; %e: an environment variable (same as $_ENV or $_SERVER)
; it must be associated with embraces to specify the name of the env
; variable. Some exemples:
; variable. Some examples:
; - server specifics like: %{REQUEST_METHOD}e or %{SERVER_PROTOCOL}e
; - HTTP headers like: %{HTTP_HOST}e or %{HTTP_USER_AGENT}e
; %f: script filename
@@ -374,7 +374,7 @@ pm.max_spare_servers = 3

; Redirect worker stdout and stderr into main error log. If not set, stdout and
; stderr will be redirected to /dev/null according to FastCGI specs.
; Note: on highloaded environement, this can cause some delay in the page
; Note: on highloaded environment, this can cause some delay in the page
; process time (several ms).
; Default Value: no
;catch_workers_output = yes
@@ -1,6 +1,6 @@
pylint==2.6.0
google-cloud==0.34.0
Flask==1.1.2
Flask==2.3.2
google-cloud-secret-manager==2.1.0
alpha-vantage==2.3.1
pandas==1.2.0
@@ -1,5 +1,5 @@
click==7.1.2
Flask==1.1.2
Flask==2.3.2
itsdangerous==1.1.0
Jinja2==2.11.3
MarkupSafe==1.1.1
4 changes: 2 additions & 2 deletions tutorials/serverless-static-ip/cloud-run/requirements.txt
@@ -1,3 +1,3 @@
Flask==1.1.2
requests==2.25.1
Flask==2.3.2
requests==2.31.0
gunicorn==20.0.4
2 changes: 0 additions & 2 deletions tutorials/writing-prometheus-metrics-bigquery/index.md
@@ -367,7 +367,5 @@ If you don't want to delete the project, you can delete the provisioned resource

## What's next

- Learn how to
[manage Cloud Monitoring dashboards with the Cloud Monitoring API](https://cloud.google.com/solutions/managing-monitoring-dashboards-automatically-using-the-api).
- Learn more about how to [export metrics from multiple projects](https://cloud.google.com/solutions/stackdriver-monitoring-metric-export).
- Try out other Google Cloud features for yourself. Have a look at those [tutorials](https://cloud.google.com/docs/tutorials).
