
Commit

Generate readmes for most service samples [(#599)](GoogleCloudPlatfor…
Jon Wayne Parrott authored and shollyman committed Jul 22, 2020
1 parent bd0da61 commit 28704e0
Showing 5 changed files with 377 additions and 7 deletions.
5 changes: 0 additions & 5 deletions samples/snippets/README.md

This file was deleted.

332 changes: 332 additions & 0 deletions samples/snippets/README.rst
@@ -0,0 +1,332 @@
.. This file is automatically generated. Do not edit this file directly.

Google BigQuery Python Samples
===============================================================================

This directory contains samples for Google BigQuery. `Google BigQuery`_ is Google's fully managed, petabyte scale, low cost analytics data warehouse. BigQuery is NoOps—there is no infrastructure to manage and you don't need a database administrator—so you can focus on analyzing data to find meaningful insights, use familiar SQL, and take advantage of our pay-as-you-go model.




.. _Google BigQuery: https://cloud.google.com/bigquery/docs

Setup
-------------------------------------------------------------------------------


Authentication
++++++++++++++

Authentication is typically done through `Application Default Credentials`_,
which means you do not have to change the code to authenticate as long as
your environment has credentials. You have a few options for setting up
authentication:

#. When running locally, use the `Google Cloud SDK`_

   .. code-block:: bash

      gcloud beta auth application-default login

#. When running on App Engine or Compute Engine, credentials are already
   set up. However, you may need to configure your Compute Engine instance
   with `additional scopes`_.

#. You can create a `Service Account key file`_. This file can be used to
   authenticate to Google Cloud Platform services from any environment. To use
   the file, set the ``GOOGLE_APPLICATION_CREDENTIALS`` environment variable to
   the path to the key file, for example:

   .. code-block:: bash

      export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service_account.json

.. _Application Default Credentials: https://cloud.google.com/docs/authentication#getting_credentials_for_server-centric_flow
.. _additional scopes: https://cloud.google.com/compute/docs/authentication#using
.. _Service Account key file: https://developers.google.com/identity/protocols/OAuth2ServiceAccount#creatinganaccount
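
Once credentials are configured, the client library discovers them
automatically. As a rough illustration (a minimal sketch assuming the
``google-cloud-bigquery`` package from ``requirements.txt``; the project ID
is a placeholder), constructing an authenticated client looks like this:

.. code-block:: python

   # Minimal sketch: the client picks up Application Default Credentials
   # automatically. "my-project" is a placeholder project ID.
   from google.cloud import bigquery

   client = bigquery.Client(project="my-project")
   print("Authenticated for project: {}".format(client.project))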

Install Dependencies
++++++++++++++++++++

#. Install `pip`_ and `virtualenv`_ if you do not already have them.

#. Create a virtualenv. Samples are compatible with Python 2.7 and 3.4+.

   .. code-block:: bash

      $ virtualenv env
      $ source env/bin/activate

#. Install the dependencies needed to run the samples.

   .. code-block:: bash

      $ pip install -r requirements.txt

.. _pip: https://pip.pypa.io/
.. _virtualenv: https://virtualenv.pypa.io/

Samples
-------------------------------------------------------------------------------

Quickstart
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++



To run this sample:

.. code-block:: bash

   $ python quickstart.py

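The quickstart itself is only a few lines of client-library code. A minimal
sketch of what it typically does, assuming the ``google-cloud-bigquery``
client library (the dataset name is a placeholder, and the shipped
``quickstart.py`` may differ):

.. code-block:: python

   # Sketch of a typical BigQuery quickstart: create a dataset in the
   # client's default project. "my_new_dataset" is a placeholder.
   from google.cloud import bigquery

   client = bigquery.Client()
   dataset = client.create_dataset("my_new_dataset")
   print("Dataset {} created.".format(dataset.dataset_id))
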
Sync query
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++



To run this sample:

.. code-block:: bash

   $ python sync_query.py

   usage: sync_query.py [-h] query

   Command-line application to perform synchronous queries in BigQuery.

   For more information, see the README.md under /bigquery.

   Example invocation:
       $ python sync_query.py \
           'SELECT corpus FROM `publicdata.samples.shakespeare` GROUP BY corpus'

   positional arguments:
     query       BigQuery SQL Query.

   optional arguments:
     -h, --help  show this help message and exit

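A synchronous query submits the SQL and blocks until rows are available. As
a hedged sketch with the current ``google-cloud-bigquery`` API (the sample
itself may pin an older library version with a different interface):

.. code-block:: python

   # Sketch of a synchronous query: result() blocks until the job finishes.
   from google.cloud import bigquery

   client = bigquery.Client()
   query_job = client.query(
       "SELECT corpus FROM `publicdata.samples.shakespeare` GROUP BY corpus"
   )
   for row in query_job.result():
       print(row.corpus)
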
Async query
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++



To run this sample:

.. code-block:: bash

   $ python async_query.py

   usage: async_query.py [-h] query

   Command-line application to perform asynchronous queries in BigQuery.

   For more information, see the README.md under /bigquery.

   Example invocation:
       $ python async_query.py 'SELECT corpus FROM `publicdata.samples.shakespeare` GROUP BY corpus'

   positional arguments:
     query       BigQuery SQL Query.

   optional arguments:
     -h, --help  show this help message and exit

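An asynchronous query starts the job and returns immediately, so the caller
can poll for completion. A minimal sketch, again assuming the current
``google-cloud-bigquery`` API rather than the version the sample pins:

.. code-block:: python

   # Sketch of an asynchronous query: start the job, poll, then read rows.
   import time

   from google.cloud import bigquery

   client = bigquery.Client()
   query_job = client.query(
       "SELECT corpus FROM `publicdata.samples.shakespeare` GROUP BY corpus"
   )

   while not query_job.done():  # done() refreshes the job state.
       time.sleep(1)

   for row in query_job.result():
       print(row.corpus)
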
Snippets
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++



To run this sample:

.. code-block:: bash

   $ python snippets.py

   usage: snippets.py [-h] [--project PROJECT]
                      {list-datasets,list-tables,create-table,list-rows,copy-table,delete-table}
                      ...

   Samples that demonstrate basic operations in the BigQuery API.

   For more information, see the README.md under /bigquery.

   Example invocation:
       $ python snippets.py list-datasets

   The dataset and table should already exist.

   positional arguments:
     {list-datasets,list-tables,create-table,list-rows,copy-table,delete-table}
       list-datasets     Lists all datasets in a given project. If no project
                         is specified, then the currently active project is
                         used.
       list-tables       Lists all of the tables in a given dataset. If no
                         project is specified, then the currently active
                         project is used.
       create-table      Creates a simple table in the given dataset. If no
                         project is specified, then the currently active
                         project is used.
       list-rows         Prints rows in the given table. Will print 25 rows at
                         most for brevity as tables can contain large amounts
                         of rows. If no project is specified, then the
                         currently active project is used.
       copy-table        Copies a table. If no project is specified, then the
                         currently active project is used.
       delete-table      Deletes a table in a given dataset. If no project is
                         specified, then the currently active project is used.

   optional arguments:
     -h, --help          show this help message and exit
     --project PROJECT

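For a flavor of what these subcommands do, here is a minimal sketch of
``list-datasets`` and ``list-tables`` with the current
``google-cloud-bigquery`` API (the sample's own implementation may differ):

.. code-block:: python

   # Sketch: list every dataset in the active project and the tables in it.
   from google.cloud import bigquery

   client = bigquery.Client()
   for dataset in client.list_datasets():
       print(dataset.dataset_id)
       for table in client.list_tables(dataset):
           print("  {}".format(table.table_id))
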
Load data from a file
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++



To run this sample:

.. code-block:: bash

   $ python load_data_from_file.py

   usage: load_data_from_file.py [-h] dataset_name table_name source_file_name

   Loads data into BigQuery from a local file.

   For more information, see the README.md under /bigquery.

   Example invocation:
       $ python load_data_from_file.py example_dataset example_table example-data.csv

   The dataset and table should already exist.

   positional arguments:
     dataset_name
     table_name
     source_file_name  Path to a .csv file to upload.

   optional arguments:
     -h, --help        show this help message and exit

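Loading from a local file uploads the file contents as a load job. A hedged
sketch with the current ``google-cloud-bigquery`` API (names are
placeholders; the dataset and table must already exist):

.. code-block:: python

   # Sketch: upload a local CSV file into an existing table via a load job.
   from google.cloud import bigquery

   client = bigquery.Client()
   table_ref = client.dataset("example_dataset").table("example_table")

   job_config = bigquery.LoadJobConfig()
   job_config.source_format = bigquery.SourceFormat.CSV

   with open("example-data.csv", "rb") as source_file:
       job = client.load_table_from_file(
           source_file, table_ref, job_config=job_config
       )
   job.result()  # Block until the load job completes.
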
Load data from Cloud Storage
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++



To run this sample:

.. code-block:: bash

   $ python load_data_from_gcs.py

   usage: load_data_from_gcs.py [-h] dataset_name table_name source

   Loads data into BigQuery from an object in Google Cloud Storage.

   For more information, see the README.md under /bigquery.

   Example invocation:
       $ python load_data_from_gcs.py example_dataset example_table gs://example-bucket/example-data.csv

   The dataset and table should already exist.

   positional arguments:
     dataset_name
     table_name
     source        The Google Cloud Storage object to load. Must be in the
                   format gs://bucket_name/object_name

   optional arguments:
     -h, --help    show this help message and exit

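Loading from Cloud Storage is the same load-job flow, pointed at a
``gs://`` URI instead of a local file. A minimal sketch under the same
assumptions as above:

.. code-block:: python

   # Sketch: load a CSV object from Cloud Storage into an existing table.
   from google.cloud import bigquery

   client = bigquery.Client()
   table_ref = client.dataset("example_dataset").table("example_table")

   job = client.load_table_from_uri(
       "gs://example-bucket/example-data.csv", table_ref
   )
   job.result()  # Block until the load job completes.
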
Load streaming data
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++



To run this sample:

.. code-block:: bash

   $ python stream_data.py

   usage: stream_data.py [-h] dataset_name table_name json_data

   Loads a single row of data directly into BigQuery.

   For more information, see the README.md under /bigquery.

   Example invocation:
       $ python stream_data.py example_dataset example_table '["Gandalf", 2000]'

   The dataset and table should already exist.

   positional arguments:
     dataset_name
     table_name
     json_data     The row to load into BigQuery as an array in JSON format.

   optional arguments:
     -h, --help    show this help message and exit

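Streaming inserts bypass the load-job machinery and make the row available
almost immediately. A sketch using the current API's JSON streaming call
(the column names here are assumptions, not the sample's schema):

.. code-block:: python

   # Sketch: stream one JSON row into an existing table. The "name" and
   # "age" columns are assumed for illustration only.
   from google.cloud import bigquery

   client = bigquery.Client()
   table = client.get_table("example_dataset.example_table")

   errors = client.insert_rows_json(table, [{"name": "Gandalf", "age": 2000}])
   if errors:
       print("Streaming insert errors: {}".format(errors))
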
Export data to Cloud Storage
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++



To run this sample:

.. code-block:: bash

   $ python export_data_to_gcs.py

   usage: export_data_to_gcs.py [-h] dataset_name table_name destination

   Exports data from BigQuery to an object in Google Cloud Storage.

   For more information, see the README.md under /bigquery.

   Example invocation:
       $ python export_data_to_gcs.py example_dataset example_table gs://example-bucket/example-data.csv

   The dataset and table should already exist.

   positional arguments:
     dataset_name
     table_name
     destination   The destination Google Cloud Storage object. Must be in the
                   format gs://bucket_name/object_name

   optional arguments:
     -h, --help    show this help message and exit

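Exporting runs an extract job that writes the table out to Cloud Storage. A
minimal sketch, assuming the current ``google-cloud-bigquery`` API and
placeholder names:

.. code-block:: python

   # Sketch: extract an existing table to a CSV object in Cloud Storage.
   from google.cloud import bigquery

   client = bigquery.Client()
   table_ref = client.dataset("example_dataset").table("example_table")

   job = client.extract_table(table_ref, "gs://example-bucket/example-data.csv")
   job.result()  # Block until the extract job completes.
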
The client library
-------------------------------------------------------------------------------

This sample uses the `Google Cloud Client Library for Python`_.
You can read the documentation for more details on API usage and use GitHub
to `browse the source`_ and `report issues`_.

.. _Google Cloud Client Library for Python:
   https://googlecloudplatform.github.io/google-cloud-python/
.. _browse the source:
   https://github.com/GoogleCloudPlatform/google-cloud-python
.. _report issues:
   https://github.com/GoogleCloudPlatform/google-cloud-python/issues
.. _Google Cloud SDK: https://cloud.google.com/sdk/
43 changes: 43 additions & 0 deletions samples/snippets/README.rst.in
@@ -0,0 +1,43 @@
# This file is used to generate README.rst

product:
  name: Google BigQuery
  short_name: BigQuery
  url: https://cloud.google.com/bigquery/docs
  description: >
    `Google BigQuery`_ is Google's fully managed, petabyte scale, low cost
    analytics data warehouse. BigQuery is NoOps—there is no infrastructure to
    manage and you don't need a database administrator—so you can focus on
    analyzing data to find meaningful insights, use familiar SQL, and take
    advantage of our pay-as-you-go model.

setup:
- auth
- install_deps

samples:
- name: Quickstart
  file: quickstart.py
- name: Sync query
  file: sync_query.py
  show_help: true
- name: Async query
  file: async_query.py
  show_help: true
- name: Snippets
  file: snippets.py
  show_help: true
- name: Load data from a file
  file: load_data_from_file.py
  show_help: true
- name: Load data from Cloud Storage
  file: load_data_from_gcs.py
  show_help: true
- name: Load streaming data
  file: stream_data.py
  show_help: true
- name: Export data to Cloud Storage
  file: export_data_to_gcs.py
  show_help: true

cloud_client_library: true
2 changes: 1 addition & 1 deletion samples/snippets/async_query.py
@@ -19,7 +19,7 @@
 For more information, see the README.md under /bigquery.
 Example invocation:
-    $ python async_query.py \
+    $ python async_query.py \\
     'SELECT corpus FROM `publicdata.samples.shakespeare` GROUP BY corpus'
 """

