Repository of my study notes from the Google Cloud Computing Foundations Certificate & Google Cloud Essentials.
Important
Copyright 2024 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.
- Cloud Shell
- 1. Virtual Machine
- 2. NGINX Web Server
- 3. Kubernetes Engine
- 4. Set Up Network and HTTP Load Balancers
- 5. App Engine with Python
- 6. Cloud Functions
- 7. Cloud Storage: CLI/SDK
- 8. Cloud SQL for MySQL
- 9. Cloud Endpoints
Cloud Shell is a Debian-based virtual machine with a persistent 5-GB home directory, which makes it easy for you to manage your Google Cloud projects and resources.
sudo apt-get update
gcloud config set accessibility/screen_reader False
For more information see Enabling accessibility features.
gcloud auth list
Expected Output
ACTIVE: *
ACCOUNT: "ACCOUNT"
To set the active account, run:
$ gcloud config set account `ACCOUNT`
gcloud config list project
Expected Output
[core]
project = "PROJECT_ID"
gcloud config get-value project
gcloud compute project-info describe --project $(gcloud config get-value project)
Note
When the google-compute-default-region and google-compute-default-zone keys and values are missing from the output, no default region or zone is set.
gcloud config list
gcloud config list --all
gcloud components list
gcloud compute instances list --filter="name=('INSTANCE_NAME')"
gcloud compute firewall-rules list
gcloud compute firewall-rules list --filter="network='default'"
gcloud compute firewall-rules list --filter="NETWORK:'default' AND ALLOW:'icmp'"
gcloud logging logs list
gcloud logging logs list --filter="compute"
gcloud logging read "resource.type=gce_instance" --limit 5
gcloud logging read "resource.type=gce_instance AND labels.instance_name='<INSTANCE_NAME>'" --limit 5
Regions | Zones |
---|---|
Regions are collections of zones. | A zone is a deployment area within a region. |
Zones have high-bandwidth, low-latency network connections to other zones in the same region. | The fully-qualified name for a zone is made up of <region>-<zone> . |
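Because a zone's fully-qualified name is `<region>-<zone>`, the region can be recovered from a zone name by stripping the last dash-separated component. A small shell sketch (the zone value below is just an example):

```shell
# A zone name is <region>-<zone>, e.g. us-central1-a.
zone="us-central1-a"    # example zone
region="${zone%-*}"     # strip the trailing "-<zone>" suffix
echo "$region"          # -> us-central1
```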
gcloud config set compute/region <REGION>
gcloud config set compute/zone <ZONE>
After setting the default region and zone, you don't have to append the
--zone
flag every time.
export REGION=<REGION>
export ZONE=<ZONE>
export ZONE=$(gcloud config get-value compute/zone)
export PROJECT_ID=$(gcloud config get-value project)
To use the variable:
$VARIABLE
echo -e "PROJECT ID: $PROJECT_ID\nZONE: $ZONE"
Note
When you run gcloud on your own machine, the config settings are persisted across sessions. But in Cloud Shell, you need to set this for every new session or reconnection.
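One way to avoid re-typing the exports on every reconnection is to append them to a startup file in the persistent home directory, so each new session re-applies them. A sketch (the REGION/ZONE values are examples, not lab-specific):

```shell
# Persist the exports in ~/.bashrc, which lives in the persistent
# 5-GB home directory, so every new Cloud Shell session re-applies
# them. Replace the example values with your own region and zone.
cat >> ~/.bashrc <<'EOF'
export REGION=us-central1
export ZONE=us-central1-a
EOF
. ~/.bashrc
echo "$ZONE"   # -> us-central1-a
```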
Compute Engine allows you to create virtual machines (VMs) that run different operating systems, including multiple flavors of Linux (Debian, Ubuntu, Suse, Red Hat, CoreOS) and Windows Server, on Google infrastructure.
gcloud compute instances create <INSTANCE_NAME> --machine-type <MACHINE_TYPE> --zone=$ZONE
Expected Output
Created [..."INSTANCE_NAME"].
NAME: "INSTANCE_NAME"
ZONE: "ZONE"
MACHINE_TYPE: "MACHINE_TYPE"
PREEMPTIBLE:
INTERNAL_IP: 10.128.0.3
EXTERNAL_IP: 34.136.51.150
STATUS: RUNNING
Where
gcloud compute
allows you to manage your Compute Engine resources in a format that's simpler than the Compute Engine API.
instances create
creates a new instance.
The --machine-type
flag specifies the machine type.
The --zone
flag specifies where the VM is created.
If you omit the --zone
flag, the gcloud tool can infer your desired zone based on your default properties. Other required instance settings, such as machine type and image, are set to default values if not specified in the create command.
gcloud compute instances list
gcloud compute instances create --help
Press Enter or the spacebar to scroll through the help content.
To exit the content, type Q.
To exit help, press CTRL + C.
To create a Windows Server, follow these steps while creating a Virtual Machine in
- Cloud Console:
- In the Boot disk section, click Change to begin configuring your boot disk.
- Under Operating system select Windows Server
- Cloud Shell
gcloud compute instances create <INSTANCE_NAME> \
--image-project windows-cloud \
--image-family <IMAGE_FAMILY> \
--machine-type <MACHINE_TYPE> \
--boot-disk-size <BOOT_DISK_SIZE> \
--boot-disk-type <BOOT_DISK_TYPE>
Where
<INSTANCE_NAME>
is the name for the new instance.
<IMAGE_FAMILY>
is one of the public image families for Windows Server images.
<MACHINE_TYPE>
is one of the available machine types.
<BOOT_DISK_SIZE>
is the size of the boot disk in GB. Larger persistent disks have higher throughput.
<BOOT_DISK_TYPE>
is the type of the boot disk for your instance. For example, pd-ssd.
gcloud compute instances get-serial-port-output <INSTANCE_NAME> --zone=$ZONE
If prompted, type
N
and press ENTER. Repeat the command until you see the following in the command output:
Instance setup finished. instance is ready to use.
gcloud compute reset-windows-password <INSTANCE_NAME> --zone $ZONE --user <USERNAME>
If asked
Would you like to set or reset the password for [admin] (Y/n)?
, enter Y.
Record the password for use in later steps to connect.
Through an RDP app already installed on your computer (Enter the external IP of your VM) or through RDP directly from the Chrome browser using the Spark View extension (Use your Windows username admin and password you previously recorded).
gcloud compute ssh <INSTANCE_NAME> --zone=$ZONE
When prompted
Do you want to continue? (Y/n)
type Y.
Disconnect from SSH by exiting from the remote shell:
exit
. To leave the passphrase empty, press Enter twice.
Note
You have connected to a virtual machine. Notice how the command prompt changed?
The prompt now says something similar to sa_107021519685252337470@<INSTANCE_NAME>
.
The part before the @
indicates the account being used.
The part after the @
sign indicates the host machine being accessed.
ssh username@hostname
sudo apt-get install -y nginx
Expected Output
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
...
ps auwx | grep nginx
Expected Output
root 2330 0.0 0.0 159532 1628 ? Ss 14:06 0:00 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
www-data 2331 0.0 0.0 159864 3204 ? S 14:06 0:00 nginx: worker process
www-data 2332 0.0 0.0 159864 3204 ? S 14:06 0:00 nginx: worker process
root 2342 0.0 0.0 12780 988 pts/0 S+ 14:07 0:00 grep nginx
http://EXTERNAL_IP/
gcloud compute firewall-rules list
Note
Communication with the virtual machine will fail as it does not have an appropriate firewall rule. The nginx web server is expecting to communicate on tcp:80. To get communication working you need to:
- Add a tag to the virtual machine
- Add a firewall rule for http traffic
gcloud compute instances add-tags <INSTANCE_NAME> --tags http-server,https-server
gcloud compute firewall-rules create <default-allow-http> --direction=INGRESS --priority=1000 --network=default --action=ALLOW --rules=tcp:80 --source-ranges=0.0.0.0/0 --target-tags=http-server
gcloud compute firewall-rules list --filter=ALLOW:'80'
Expected Output
NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED
<default-allow-http> default INGRESS 1000 tcp:80 False
curl http://$(gcloud compute instances list --filter=name:<INSTANCE_NAME> --format='value(EXTERNAL_IP)')
You will see the default nginx output.
Google Kubernetes Engine (GKE) provides a managed environment for deploying, managing, and scaling your containerized applications using Google infrastructure. The GKE environment consists of multiple machines (specifically Compute Engine instances) grouped to form a container cluster.
gcloud container clusters create --machine-type=<MACHINE_TYPE> --zone=$ZONE <CLUSTER_NAME>
A cluster consists of at least one cluster master machine and multiple worker machines called nodes. Nodes are Compute Engine virtual machine (VM) instances that run the Kubernetes processes necessary to make them part of the cluster.
You can ignore any warnings in the output. It might take several minutes to finish creating the cluster.
Note
Cluster names must start with a letter and end with an alphanumeric, and cannot be longer than 40 characters.
Expected Output
NAME: <CLUSTER_NAME>
LOCATION: <ZONE>
MASTER_VERSION: 1.22.8-gke.202
MASTER_IP: 34.67.240.12
MACHINE_TYPE: <MACHINE_TYPE>
NODE_VERSION: 1.22.8-gke.202
NUM_NODES: 3
STATUS: RUNNING
After creating your cluster, you need authentication credentials to interact with it.
gcloud container clusters get-credentials lab-cluster
Expected Output
Fetching cluster endpoint and auth data.
kubeconfig entry generated for lab-cluster.
GKE uses Kubernetes objects to create and manage your cluster's resources. Kubernetes provides the Deployment object for deploying stateless applications like web servers. Service objects define rules and load balancing for accessing your application from the internet.
kubectl create deployment hello-server --image=gcr.io/google-samples/hello-app:1.0
Expected Output
deployment.apps/hello-server created
This Kubernetes command creates a deployment object that represents
hello-server
. In this case,
--image
specifies a container image to deploy. The command pulls the example image from a Container Registry bucket.
gcr.io/google-samples/hello-app:1.0
indicates the specific image version to pull. If a version is not specified, the latest version is used.
This is a Kubernetes resource that lets you expose your application to external traffic.
kubectl expose deployment hello-server --type=LoadBalancer --port 8080
Expected Output
service/hello-server exposed
Where
--port
specifies the port that the container exposes.
type="LoadBalancer"
creates a Compute Engine load balancer for your container.
kubectl get service
Expected Output
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-server LoadBalancer 10.39.244.36 35.202.234.26 8080:31991/TCP 65s
kubernetes ClusterIP 10.39.240.1 <none> 443/TCP 5m13s
Note
It might take a minute for an external IP address to be generated.
Run the previous command again if the EXTERNAL-IP column status is pending.
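The EXTERNAL-IP can also be captured without reading the table by hand. A sketch that parses the fourth column of a hard-coded sample row (the IP shown is illustrative, not live output):

```shell
# Grab the EXTERNAL-IP column (4th field) from a `kubectl get service`
# row. The row below is a hard-coded sample for illustration.
line="hello-server   LoadBalancer   10.39.244.36   35.202.234.26   8080:31991/TCP   65s"
external_ip=$(echo "$line" | awk '{print $4}')
echo "http://${external_ip}:8080"   # -> http://35.202.234.26:8080
```

Against a live cluster, `kubectl get service hello-server -o jsonpath='{.status.loadBalancer.ingress[0].ip}'` returns the same address without column parsing.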
Open a new tab and enter the following address, replacing
<EXTERNAL-IP>
with the EXTERNAL-IP for hello-server.
http://<EXTERNAL-IP>:8080
Expected Output
The browser tab displays the message Hello, world! as well as the version and hostname.
gcloud container clusters delete lab-cluster
When prompted, type
Y
to confirm. Deleting the cluster can take a few minutes.
There are several ways you can load balance on Google Cloud. Here you will set up the following load balancers:
- Network Load Balancer
- HTTP(s) Load Balancer
gcloud compute instances create <www1> \
--zone=$ZONE \
--tags=network-lb-tag \
--machine-type=e2-small \
--image-family=debian-11 \
--image-project=debian-cloud \
--metadata=startup-script='#!/bin/bash
apt-get update
apt-get install apache2 -y
service apache2 restart
echo "
<h3>Web Server: <www1></h3>" | tee /var/www/html/index.html'
For this load balancing scenario, three Compute Engine VM instances were created, and Apache was installed on each of them.
gcloud compute firewall-rules create <www-firewall-network-lb> \
--target-tags network-lb-tag --allow tcp:80
Now you need to get the external IP addresses of your instances and verify that they are running.
gcloud compute instances list
curl http://<IP_ADDRESS>
Replace <IP_ADDRESS> with the IP address for each of your VMs.
When you configure the load balancing service, your virtual machine instances receive packets that are destined for the static external IP address you configure. Instances made with a Compute Engine image are automatically configured to handle this IP address.
Note
Learn more about how to set up network load balancing from the External TCP/UDP Network Load Balancing overview Guide.
gcloud compute addresses create <network-lb-ip-1> \
--region $REGION
Expected Output
Created [https://www.googleapis.com/compute/v1/projects/qwiklabs-gcp-03-xxxxxxxxxxx/regions//addresses/network-lb-ip-1].
gcloud compute http-health-checks create <basic-check>
gcloud compute target-pools create <www-pool> \
--region $REGION --http-health-check <basic-check>
gcloud compute target-pools add-instances <www-pool> \
--instances <www1>,<www2>,<www3>
gcloud compute forwarding-rules create <www-rule> \
--region $REGION \
--ports 80 \
--address <network-lb-ip-1> \
--target-pool <www-pool>
gcloud compute forwarding-rules describe <www-rule> --region $REGION
IPADDRESS=$(gcloud compute forwarding-rules describe <www-rule> --region $REGION --format="json" | jq -r .IPAddress)
echo $IPADDRESS
while true; do curl -m1 $IPADDRESS; done
The loop uses the
$IPADDRESS
value captured in the previous command. Use Ctrl + C to stop running the command.
Note
The response from the curl
command alternates randomly among the three instances. If your response is initially unsuccessful, wait approximately 30 seconds for the configuration to be fully loaded and for your instances to be marked healthy before trying again.
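To see how the requests are distributed, the loop output can be tallied per backend. A sketch against simulated responses (the here-doc stands in for real loop output; in a live run you would pipe the loop itself through the same `sort | uniq -c`):

```shell
# Tally which backend served each response. The here-doc simulates
# the alternating output of the curl loop; the server names match
# the <h3> pages installed by the startup scripts.
cat <<'EOF' | sort | uniq -c | sort -rn
<h3>Web Server: www1</h3>
<h3>Web Server: www2</h3>
<h3>Web Server: www1</h3>
<h3>Web Server: www3</h3>
EOF
```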
HTTP(S) Load Balancing is implemented on Google Front End (GFE). GFEs are distributed globally and operate together using Google's global network and control plane. You can configure URL rules to route some URLs to one set of instances and route other URLs to other instances.
Requests are always routed to the instance group that is closest to the user, if that group has enough capacity and is appropriate for the request. If the closest group does not have enough capacity, the request is sent to the closest group that does have capacity.
Important
To set up a load balancer with a Compute Engine backend, your VMs need to be in an instance group. The managed instance group provides VMs running the backend servers of an external HTTP load balancer.
gcloud compute instance-templates create <lb-backend-template> \
--region=$REGION \
--network=default \
--subnet=default \
--tags=allow-health-check \
--machine-type=e2-medium \
--image-family=debian-11 \
--image-project=debian-cloud \
--metadata=startup-script='#!/bin/bash
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://169.254.169.254/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
systemctl restart apache2'
Managed instance groups (MIGs) let you operate apps on multiple identical VMs.
You can make your workloads scalable and highly available by taking advantage of automated MIG services, including: autoscaling, autohealing, regional (multiple zone) deployment, and automatic updating.
gcloud compute instance-groups managed create <lb-backend-group> \
--template=<lb-backend-template> --size=2 --zone=$ZONE
gcloud compute firewall-rules create <fw-allow-health-check> \
--network=default \
--action=allow \
--direction=ingress \
--source-ranges=130.211.0.0/22,35.191.0.0/16 \
--target-tags=allow-health-check \
--rules=tcp:80
Note
The ingress rule allows traffic from the Google Cloud health checking systems (130.211.0.0/22 and 35.191.0.0/16).
gcloud compute addresses create <lb-ipv4-1> \
--ip-version=IPV4 \
--global
gcloud compute addresses describe <lb-ipv4-1> \
--format="get(address)" \
--global
gcloud compute health-checks create http <http-basic-check> \
--port 80
Note
Google Cloud provides health checking mechanisms that determine whether backend instances respond properly to traffic. For more information, please refer to the Creating health checks document.
gcloud compute backend-services create <web-backend-service> \
--protocol=HTTP \
--port-name=http \
--health-checks=<http-basic-check> \
--global
gcloud compute backend-services add-backend <web-backend-service> \
--instance-group=<lb-backend-group> \
--instance-group-zone=$ZONE \
--global
gcloud compute url-maps create <web-map-http> \
--default-service <web-backend-service>
Note
A URL map is a Google Cloud configuration resource used to route requests to backend services or backend buckets.
For example, with an external HTTP(S) load balancer, you can use a single URL map to route requests to different destinations based on the rules configured in the URL map:
- Requests for https://example.com/video go to one backend service.
- Requests for https://example.com/audio go to a different backend service.
- Requests for https://example.com/images go to a Cloud Storage backend bucket.
- Requests for any other host and path combination go to a default backend service.
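The path-based routing a URL map performs can be pictured as a simple prefix match. A toy shell sketch (the backend names are illustrative, not actual gcloud resources):

```shell
# Toy model of URL-map routing: match the request path prefix and
# print the backend that would receive the request. Backend names
# are hypothetical, for illustration only.
route() {
  case "$1" in
    /video*)  echo "video-backend-service" ;;
    /audio*)  echo "audio-backend-service" ;;
    /images*) echo "images-backend-bucket" ;;
    *)        echo "web-backend-service" ;;   # default service
  esac
}
route /video/intro.mp4   # -> video-backend-service
route /about             # -> web-backend-service
```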
gcloud compute target-http-proxies create <http-lb-proxy> \
--url-map <web-map-http>
gcloud compute forwarding-rules create <http-content-rule> \
--address=<lb-ipv4-1> \
--global \
--target-http-proxy=<http-lb-proxy> \
--ports=80
Note
A forwarding rule and its corresponding IP address represent the frontend configuration of a Google Cloud load balancer. Learn more about the general understanding of forwarding rules from the Forwarding rule overview Guide.
- In the Google Cloud console, from the Navigation menu, go to Network services > Load balancing.
- Click on the load balancer that you just created (
web-map-http
).
- In the Backend section, click on the name of the backend and confirm that the VMs are Healthy. If they are not healthy, wait a few moments and try reloading the page.
- When the VMs are healthy, test the load balancer using a web browser, going to
http://IP_ADDRESS/
, replacing IP_ADDRESS
with the load balancer's IP address.
This may take three to five minutes. If you do not connect, wait a minute, and then reload the browser.
Your browser should render a page with content showing the name of the instance that served the page, along with its zone (for example, Page served from:
lb-backend-group-xxxx
).
App Engine allows developers to focus on doing what they do best: writing code, not managing what it runs on. The notion of servers, virtual machines, and instances has been abstracted away, with App Engine providing all the compute necessary. Developers don't have to worry about operating systems, web servers, logging, monitoring, load balancing, system administration, or scaling, as App Engine takes care of all that.
The App Engine Admin API enables developers to provision and manage their App Engine Applications.
- In the left Navigation menu, click APIs & Services > Library.
- Type "App Engine Admin API" in the search box.
- Click the App Engine Admin API card.
If there is no prompt to enable the API, then it is already enabled and no action is needed.
There is a simple Hello World app for Python you can use to quickly get a feel for deploying an app to Google Cloud.
git clone https://github.com/GoogleCloudPlatform/python-docs-samples.git
cd python-docs-samples/appengine/standard_python3/hello_world
Test the application using the Google Cloud development server (dev_appserver.py), which is included with the preinstalled App Engine SDK.
From within your helloworld directory where the app's app.yaml configuration file is located
dev_appserver.py app.yaml
The development server is now running and listening for requests on port 8080.
View the results by clicking the Web preview > Preview on port 8080.
You'll see a "Hello World!" in a new browser window.
You can leave the development server running while you develop your application. The development server watches for changes in your source files and reloads them if necessary.
cd python-docs-samples/appengine/standard_python3/hello_world
nano main.py
Change "Hello World!" to "Hello, Cruel World!". Save the file with CTRL-S and exit with CTRL-X.
Browser window with Hello, Cruel World! on the page.
From within the root directory of your application where the app.yaml file is located
gcloud app deploy
The App Engine application will then be created.
Expected Output
Creating App Engine application in project [qwiklabs-gcp-233dca09c0ab577b] and region ["REGION"]....done.
Services to deploy:
descriptor: [/home/gcpstaging8134_student/python-docs-samples/appengine/standard/hello_world/app.yaml]
source: [/home/gcpstaging8134_student/python-docs-samples/appengine/standard/hello_world]
target project: [qwiklabs-gcp-233dca09c0ab577b]
target service: [default]
target version: [20171117t072143]
target url: [https://qwiklabs-gcp-233dca09c0ab577b.appspot.com]
Do you want to continue (Y/n)?
Enter Y when prompted to confirm the details and begin the deployment of service.
Expected Output
Beginning deployment of service [default]...
Some files were skipped. Pass `--verbosity=info` to see which ones.
You may also view the gcloud log file, found at
[/tmp/tmp.dYC7xGu3oZ/logs/2017.11.17/07.18.27.372768.log].
╔════════════════════════════════════════════════════════════╗
╠═ Uploading 5 files to Google Cloud Storage                ═╣
╚════════════════════════════════════════════════════════════╝
File upload done.
Updating service [default]...done.
Waiting for operation [apps/qwiklabs-gcp-233dca09c0ab577b/operations/2e88ab76-33dc-4aed-93c4-fdd944a95ccf] to complete...done.
Updating service [default]...done.
Deployed service [default] to [https://qwiklabs-gcp-233dca09c0ab577b.appspot.com]
You can stream logs from the command line by running:
$ gcloud app logs tail -s default
To view your application in the web browser run:
$ gcloud app browse
Note
If you receive an "Unable to retrieve P4SA" error while deploying the app, re-run the above command.
gcloud app browse
Click on the link it provides:
Expected Output
Did not detect your browser. Go to this link to view your app:
https://qwiklabs-gcp-233dca09c0ab577b.appspot.com
Your application is deployed and you can read the short message in your browser.
A cloud function is a piece of code that runs in response to an event, such as an HTTP request, a message from a messaging service, or a file upload. Cloud events are things that happen in your cloud environment.
This function writes a message to the Cloud Functions logs.
It is triggered by cloud function events and accepts a callback function used to signal completion of the function.
The cloud function event is a cloud pub/sub topic event. A pub/sub is a messaging service where the senders of messages are decoupled from the receivers of messages. When a message is sent or posted, a subscription is required for a receiver to be alerted and receive the message. To learn more: A Google-Scale Messaging Service.
gcloud config set compute/region <REGION>
mkdir gcf_hello_world
cd gcf_hello_world
nano index.js
/**
* Background Cloud Function to be triggered by Pub/Sub.
* This function is exported by index.js, and executed when
* the trigger topic receives a message.
*
* @param {object} data The event payload.
* @param {object} context The event metadata.
*/
exports.helloWorld = (data, context) => {
const pubSubMessage = data;
const name = pubSubMessage.data
? Buffer.from(pubSubMessage.data, 'base64').toString() : "Hello World";
console.log(`My Cloud Function: ${name}`);
};
Exit nano (Ctrl+x) and save (Y) the file.
gsutil mb -p <PROJECT_ID> gs://<BUCKET_NAME>
When deploying a new function, you must specify
--trigger-topic
,--trigger-bucket
, or--trigger-http
. When deploying an update to an existing function, the function keeps the existing trigger unless otherwise specified.
gcloud services disable cloudfunctions.googleapis.com
gcloud services enable cloudfunctions.googleapis.com
gcloud projects add-iam-policy-binding <PROJECT_ID> \
--member="serviceAccount:<PROJECT_ID>@appspot.gserviceaccount.com" \
--role="roles/artifactregistry.reader"
gcloud functions deploy helloWorld \
--stage-bucket <BUCKET_NAME> \
--trigger-topic hello_world \
--runtime nodejs20
Note
If you get OperationError, ignore the warning and re-run the command.
If prompted, enter Y to allow unauthenticated invocations of a new function.
gcloud functions describe helloWorld
An ACTIVE status indicates that the function has been deployed.
Every message published to the topic triggers a function execution; the message contents are passed as input data.
After you deploy the function and know that it's active, test that the function writes a message to the cloud log after detecting an event.
DATA=$(printf 'Hello World!'|base64) && gcloud functions call helloWorld --data '{"data":"'$DATA'"}'
The cloud tool returns the execution ID for the function, which means a message has been written in the log.
Example Output
executionId: 3zmhpf7l6j5b
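The payload passed with --data must be base64-encoded because the function decodes pubSubMessage.data with Buffer.from(data, 'base64'). A quick shell round-trip shows what the function actually receives:

```shell
# Build the payload exactly as in the test command, then decode it
# the way the function's Buffer.from(..., 'base64') does.
DATA=$(printf 'Hello World!' | base64)
echo "$DATA"                            # SGVsbG8gV29ybGQh
printf '%s' "$DATA" | base64 --decode   # Hello World!
```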
View logs to confirm that there are log messages with that execution ID.
gcloud functions logs read helloWorld
Example Output
LEVEL: D
NAME: helloWorld
EXECUTION_ID: 4bgl3jw2a9i3
TIME_UTC: 2023-03-23 13:45:31.545
LOG: Function execution took 912 ms, finished with status: 'ok'
LEVEL: I
NAME: helloWorld
EXECUTION_ID: 4bgl3jw2a9i3
TIME_UTC: 2023-03-23 13:45:31.533
LOG: My Cloud Function: Hello World!
LEVEL: D
NAME: helloWorld
EXECUTION_ID: 4bgl3jw2a9i3
TIME_UTC: 2023-03-23 13:45:30.633
LOG: Function execution started
Note
The logs can take around 10 minutes to appear. Alternatively, you can view the logs in Logging > Logs Explorer.
Your application is deployed, tested, and you can view the logs.
Cloud Storage allows world-wide storage and retrieval of any amount of data at any time. You can use Cloud Storage for a range of scenarios including serving website content, storing data for archival and disaster recovery, or distributing large data objects to users via direct download. It's used for Unstructured Data. Organized in buckets.
Bucket naming rules
- Do not include sensitive information in the bucket name, because the bucket namespace is global and publicly visible.
- Bucket names must contain only lowercase letters, numbers, dashes (-), underscores (_), and dots (.). Names containing dots require verification.
- Bucket names must start and end with a number or letter.
- Bucket names must contain 3 to 63 characters. Names containing dots can contain up to 222 characters, but each dot-separated component can be no longer than 63 characters.
- Bucket names cannot be represented as an IP address in dotted-decimal notation (for example, 192.168.5.4).
- Bucket names cannot begin with the "goog" prefix.
- Bucket names cannot contain "google" or close misspellings of "google".
- For DNS compliance and future compatibility, you should not use underscores (_) or have a period adjacent to another period or dash. For example, ".." or "-." or ".-" are not valid in DNS names.
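Some of these rules can be checked locally before calling gsutil mb. A hedged sketch covering only a subset of the rules (length, allowed characters, start/end characters, "goog" prefix); it is a local sanity check, not the full rule set:

```shell
# Check a candidate bucket name against a subset of the naming rules:
# 3-63 characters, lowercase letters/numbers/dashes/underscores/dots,
# starts and ends with a letter or number, no "goog" prefix.
valid_bucket_name() {
  name="$1"
  case "$name" in goog*) return 1 ;; esac
  echo "$name" | grep -Eq '^[a-z0-9][a-z0-9._-]{1,61}[a-z0-9]$'
}
valid_bucket_name "my-study-bucket-01" && echo "ok"   # -> ok
```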
mb
make bucket command. Remember to follow the Bucket naming rules.
gsutil mb gs://<YOUR-BUCKET-NAME>
This command is creating a bucket with default settings.
To see default settings, use the Cloud console Navigation menu > Cloud Storage, then click on your bucket name, and click on the Configuration tab.
Note
If the bucket name is already taken, either by you or someone else, try again with a different bucket name.
curl https://upload.wikimedia.org/wikipedia/commons/thumb/a/a4/Ada_Lovelace_portrait.jpg/800px-Ada_Lovelace_portrait.jpg --output ada.jpg
gsutil cp ada.jpg gs://YOUR-BUCKET-NAME
Note
When typing your bucket name, you can use the tab key to autocomplete it.
You can see the image load into your bucket from the command line.
rm ada.jpg
gsutil cp -r gs://YOUR-BUCKET-NAME/ada.jpg .
Expected Output
Copying gs://YOUR-BUCKET-NAME/ada.jpg...
/ [1 files][360.1 KiB/360.1 KiB]
Operation completed over 1 objects/360.1 KiB.
gsutil cp gs://YOUR-BUCKET-NAME/ada.jpg gs://YOUR-BUCKET-NAME/image-folder/
Note
Compared to local file systems, folders in Cloud Storage have limitations, but many of the same operations are supported.
Expected Output
Copying gs://YOUR-BUCKET-NAME/ada.jpg [Content-Type=image/png]...
- [1 files] [ 360.1 KiB/ 360.1 KiB]
Operation completed over 1 objects/360.1 KiB
gsutil ls gs://YOUR-BUCKET-NAME
Expected Output
gs://YOUR-BUCKET-NAME/ada.jpg
gs://YOUR-BUCKET-NAME/image-folder/
gsutil ls -l gs://YOUR-BUCKET-NAME/ada.jpg
Expected Output
306768 2017-12-26T16:07:57Z gs://YOUR-BUCKET-NAME/ada.jpg
TOTAL: 1 objects, 306768 bytes (299.58 KiB)
acl
Access Control List. A mechanism you can use to define who has access to your buckets and objects.
gsutil acl ch -u AllUsers:R gs://YOUR-BUCKET-NAME/ada.jpg
Expected Output
Updated ACL on gs://YOUR-BUCKET-NAME/ada.jpg
Your image is now public, and can be made available to anyone.
Go to Navigation menu > Cloud Storage, then click on the name of your bucket.
You should see your image with the Public link box. Click the Copy URL and open the URL in a new browser tab.
gsutil acl ch -d AllUsers gs://YOUR-BUCKET-NAME/ada.jpg
Expected Output
Updated ACL on gs://YOUR-BUCKET-NAME/ada.jpg
Verify that you've removed public access by clicking the Refresh button in the console. The checkmark will be removed.
gsutil rm gs://YOUR-BUCKET-NAME/ada.jpg
Expected Output
Removing gs://YOUR-BUCKET-NAME/ada.jpg...
Refresh the console. The copy of the image file is no longer stored on Cloud Storage (though the copy you made in the image-folder/ folder still exists).
- From the Navigation menu > SQL.
- Click Create Instance.
- Choose MySQL database engine.
- Enter Instance ID as
myinstance
. - In the password field click on the Generate link and the eye icon to see the password. Save the password.
- Select the database version as MySQL 8.
- For Choose a Cloud SQL edition, select Enterprise edition.
- For Preset choose Development (4 vCPU, 16 GB RAM, 100 GB Storage, Single zone).
Warning
If you choose a preset larger than Development, your project will be flagged and your lab will be terminated.
- Set Region as <REGION>.
- Set the Multi zones (Highly available) > Primary Zone field as .
- Click CREATE INSTANCE.
It might take a few minutes for the instance to be created. Once it is, you will see a green checkmark next to the instance name.
- Click on the Cloud SQL instance. The SQL Overview page opens.
gcloud sql connect <myinstance> --user=root
Enter your root password when prompted. Note: The cursor will not move.
Press the Enter key when you're done typing.
You should now see the
mysql
prompt.
CREATE DATABASE guestbook;
USE guestbook;
CREATE TABLE entries (guestName VARCHAR(255), content VARCHAR(255),
entryID INT NOT NULL AUTO_INCREMENT, PRIMARY KEY(entryID));
INSERT INTO entries (guestName, content) values ("first guest", "I got here!");
INSERT INTO entries (guestName, content) values ("second guest", "Me too!");
SELECT * FROM entries;
Expected Output
+--------------+-------------------+---------+
| guestName | content | entryID |
+--------------+-------------------+---------+
| first guest | I got here! | 1 |
| second guest | Me too! | 2 |
+--------------+-------------------+---------+
2 rows in set (0.00 sec)
mysql>
gsutil cp gs://spls/gsp164/endpoints-quickstart.zip .
unzip endpoints-quickstart.zip
cd endpoints-quickstart
To publish a REST API to Endpoints, an OpenAPI configuration file that describes the API is required.
In the endpoints-quickstart directory:
cd scripts
./deploy_api.sh
Cloud Endpoints uses the host field in the OpenAPI configuration file to identify the service.
When you prepare an OpenAPI configuration file for your own service, you will need to set the ID of your Cloud project as part of the name configured in the host field.
The script then deploys the OpenAPI configuration to Service Management using the command:
gcloud endpoints services deploy openapi.yaml
As it is creating and configuring the service, Service Management outputs some information to the console. You can safely ignore the warnings about the paths in openapi.yaml not requiring an API key.
Expected Output
Service Configuration [2017-02-13-r2] uploaded for service [airports-api.endpoints.example-project.cloud.goog]
To deploy the API backend, make sure you are in the endpoints-quickstart/scripts directory.
./deploy_app.sh ../app/app_template.yaml
The script runs the following command to create an App Engine flexible environment in the <REGION> region:
gcloud app create --region="$REGION"
It takes a couple minutes to create the App Engine flexible backend.
Note
If you get an `ERROR: NOT_FOUND: Unable to retrieve P4SA: from GAIA` message, rerun the deploy_app.sh script.
You'll see the following displayed in Cloud Shell after the App Engine is created:
Success! The app is now created. Please use `gcloud app deploy` to deploy your first app.
The script goes on to run the `gcloud app deploy` command to deploy the sample API to App Engine.
Expected Output
Deploying ../app/app_template.yaml...You are about to deploy the following services:
It takes several minutes for the API to be deployed to App Engine.
You'll see a line like the following when the API is successfully deployed to App Engine:
Expected Output
Deployed service [default] to [https://example-project.appspot.com]
./query_api.sh
The script echoes the curl command that it uses to send a request to the API, and then displays the result. You'll see something like the following in Cloud Shell:
curl "https://example-project.appspot.com/airportName?iataCode=SFO"
San Francisco International Airport
The API expects one query parameter, `iataCode`, which must be set to a valid IATA airport code such as SEA or JFK.
./query_api.sh JFK
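In essence, query_api.sh builds a request URL from its argument. A sketch of that step (the host value is an assumption; the argument defaults to SFO when none is given):

```shell
# Build the request URL from the first argument, defaulting to SFO.
IATA_CODE="${1:-SFO}"
URL="https://example-project.appspot.com/airportName?iataCode=${IATA_CODE}"
echo "$URL"
# The script then fetches it, roughly: curl -s "$URL"
```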
For APIs deployed with Cloud Endpoints, you can monitor critical operations metrics in the Cloud Console and gain insight into your users and usage with Cloud Logging.
./generate_traffic.sh
Note
This script generates requests in a loop and automatically times out in 5 minutes. To end the script sooner, enter CTRL+C in Cloud Shell.
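The structure of that loop can be sketched as follows, with the deadline shortened to 2 seconds and the network request stubbed out so the loop itself is visible (the real script curls the API for up to 5 minutes):

```shell
# Loop until a deadline, counting iterations in place of real requests.
deadline=$((SECONDS + 2))   # real script: roughly SECONDS + 300
count=0
while [ "$SECONDS" -lt "$deadline" ]; do
  count=$((count + 1))      # real script: curl -s "$URL" > /dev/null
  sleep 0.1
done
echo "sent $count requests"
```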
In the Console:
Go to Navigation menu > Endpoints > Services and click the Airport Codes service.
It may take a few moments for the requests to be reflected in the graphs.
- Permissions Panel:
The Permissions panel allows you to control who has access to your API and the level of access.
- Deployment history tab:
This tab displays a history of your API deployments, including the deployment time and who deployed the change.
- Overview tab:
Here you'll see the traffic coming in. After the traffic generation script has been running for a minute, scroll down to see the three lines on the Total latency graph (50th, 95th, and 99th percentiles). This data provides a quick estimate of response times.
At the bottom of the Endpoints graphs, under Method, click the View logs link for GET /airportName. The **Logs Viewer** page displays the request logs for the API.
Enter CTRL+C in Cloud Shell to stop the script.
Note
This is a beta release of Quotas. This feature might be changed in backward-incompatible ways and is not subject to any SLA or deprecation policy.
Cloud Endpoints lets you set quotas so you can control the rate at which applications can call your API. Quotas can be used to protect your API from excessive usage by a single client.
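In the OpenAPI configuration, a quota is declared as a metric plus a limit. A sketch of the relevant additions in openapi_with_ratelimit.yaml, using the metric and limit names that appear in the quota error later in this lab (exact file contents may differ):

```yaml
# Assumed shape of the rate-limiting config; names from the quota error below.
x-google-management:
  metrics:
    - name: airport_requests
      valueType: INT64
      metricKind: DELTA
  quota:
    limits:
      - name: limit-on-airport-requests
        metric: airport_requests
        unit: 1/min/{project}
        values:
          STANDARD: 300   # 300 per minute, i.e. about 5 requests per second
```

Each rate-limited operation then carries an `x-google-quota` section whose `metricCosts` charges one unit of `airport_requests` per call.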
./deploy_api.sh ../openapi_with_ratelimit.yaml
This may take a few minutes.
./deploy_app.sh ../app/app_template.yaml
- In the Console, navigate to Navigation menu > APIs & Services > Credentials.
- Click Create credentials and choose API key. A new API key is displayed on the screen.
- Click the Copy to clipboard icon to copy it to your clipboard.
Replace YOUR-API-KEY with the API key you just created:
export API_KEY=YOUR-API-KEY
./query_api_with_key.sh $API_KEY
Expected Output
curl -H 'x-api-key: AIzeSyDbdQdaSdhPMdiAuddd_FALbY7JevoMzAB' "https://example-project.appspot.com/airportName?iataCode=SFO"
San Francisco International Airport
The API now has a limit of 5 requests per second.
./generate_traffic_with_key.sh $API_KEY
After running the script for 5-10 seconds, enter CTRL+C in Cloud Shell to stop the script.
./query_api_with_key.sh $API_KEY
Expected Output
{
"code": 8,
"message": "Insufficient tokens for quota 'airport_requests' and limit 'limit-on-airport-requests' of service 'example-project.appspot.com' for consumer 'api_key:AIzeSyDbdQdaSdhPMdiAuddd_FALbY7JevoMzAB'.",
"details": [
{
"@type": "type.googleapis.com/google.rpc.DebugInfo",
"stackEntries": [],
"detail": "internal"
}
]
}
If you get a different response, run the generate_traffic_with_key.sh script again and then retry the query.