Merge branch 'master' of https://github.com/apache/superset into pexdax/db-connection-ui
hughhhh committed Jun 10, 2021
2 parents cbbb257 + f8b270d commit 5eea4f3
Showing 60 changed files with 2,434 additions and 1,607 deletions.
4 changes: 2 additions & 2 deletions .github/workflows/superset-python-presto-hive.yml
@@ -73,7 +73,7 @@ jobs:
setup-postgres
- name: Run celery
if: steps.check.outcome == 'failure'
run: celery worker --app=superset.tasks.celery_app:app -Ofair -c 2 &
run: celery --app=superset.tasks.celery_app:app worker -Ofair -c 2 &
- name: Python unit tests (PostgreSQL)
if: steps.check.outcome == 'failure'
run: |
@@ -148,7 +148,7 @@ jobs:
setup-postgres
- name: Run celery
if: steps.check.outcome == 'failure'
run: celery worker --app=superset.tasks.celery_app:app -Ofair -c 2 &
run: celery --app=superset.tasks.celery_app:app worker -Ofair -c 2 &
- name: Python unit tests (PostgreSQL)
if: steps.check.outcome == 'failure'
run: |
6 changes: 3 additions & 3 deletions .github/workflows/superset-python-unittest.yml
@@ -62,7 +62,7 @@ jobs:
setup-mysql
- name: Run celery
if: steps.check.outcome == 'failure'
run: celery worker --app=superset.tasks.celery_app:app -Ofair -c 2 &
run: celery --app=superset.tasks.celery_app:app worker -Ofair -c 2 &
- name: Python unit tests (MySQL)
if: steps.check.outcome == 'failure'
run: |
@@ -126,7 +126,7 @@ jobs:
setup-postgres
- name: Run celery
if: steps.check.outcome == 'failure'
run: celery worker --app=superset.tasks.celery_app:app -Ofair -c 2 &
run: celery --app=superset.tasks.celery_app:app worker -Ofair -c 2 &
- name: Python unit tests (PostgreSQL)
if: steps.check.outcome == 'failure'
run: |
@@ -182,7 +182,7 @@ jobs:
mkdir ${{ github.workspace }}/.temp
- name: Run celery
if: steps.check.outcome == 'failure'
run: celery worker --app=superset.tasks.celery_app:app -Ofair -c 2 &
run: celery --app=superset.tasks.celery_app:app worker -Ofair -c 2 &
- name: Python unit tests (SQLite)
if: steps.check.outcome == 'failure'
run: |
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -1264,7 +1264,7 @@ To do this, you'll need to:
- Start up a celery worker
```shell script
celery worker --app=superset.tasks.celery_app:app -Ofair
celery --app=superset.tasks.celery_app:app worker -Ofair
```
Note that:
4 changes: 2 additions & 2 deletions docker/docker-bootstrap.sh
@@ -38,10 +38,10 @@ fi

if [[ "${1}" == "worker" ]]; then
echo "Starting Celery worker..."
celery worker --app=superset.tasks.celery_app:app -Ofair -l INFO
celery --app=superset.tasks.celery_app:app worker -Ofair -l INFO
elif [[ "${1}" == "beat" ]]; then
echo "Starting Celery beat..."
celery beat --app=superset.tasks.celery_app:app --pidfile /tmp/celerybeat.pid -l INFO -s "${SUPERSET_HOME}"/celerybeat-schedule
celery --app=superset.tasks.celery_app:app beat --pidfile /tmp/celerybeat.pid -l INFO -s "${SUPERSET_HOME}"/celerybeat-schedule
elif [[ "${1}" == "app" ]]; then
echo "Starting web app..."
flask run -p 8088 --with-threads --reload --debugger --host=0.0.0.0
4 changes: 2 additions & 2 deletions docs/installation.rst
@@ -1080,11 +1080,11 @@ have the same configuration.
* To start a Celery worker to leverage the configuration run: ::

celery worker --app=superset.tasks.celery_app:app --pool=prefork -O fair -c 4
celery --app=superset.tasks.celery_app:app worker --pool=prefork -O fair -c 4

* To start a job which schedules periodic background jobs, run ::

celery beat --app=superset.tasks.celery_app:app
celery --app=superset.tasks.celery_app:app beat

To set up a result backend, you need to pass an instance of a derivative
of ``cachelib.base.BaseCache`` to the ``RESULTS_BACKEND``
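A minimal sketch of such a setup in ``superset_config.py``, assuming a local Redis
instance (the host, port, and key prefix below are placeholders): ::

    from cachelib.redis import RedisCache

    # Placeholder connection details; adjust to your environment
    RESULTS_BACKEND = RedisCache(
        host="localhost", port=6379, key_prefix="superset_results")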
4 changes: 2 additions & 2 deletions docs/src/pages/docs/installation/alerts_reports.mdx
@@ -246,7 +246,7 @@ services:
- superset
- postgres
- redis
command: "celery worker --app=superset.tasks.celery_app:app --pool=gevent --concurrency=500"
command: "celery --app=superset.tasks.celery_app:app worker --pool=gevent --concurrency=500"
volumes:
- ./config/:/app/pythonpath/
beat:
@@ -258,7 +258,7 @@ services:
- superset
- postgres
- redis
command: "celery beat --app=superset.tasks.celery_app:app --pidfile /tmp/celerybeat.pid --schedule /tmp/celerybeat-schedule"
command: "celery --app=superset.tasks.celery_app:app beat --pidfile /tmp/celerybeat.pid --schedule /tmp/celerybeat-schedule"
volumes:
- ./config/:/app/pythonpath/
superset:
4 changes: 2 additions & 2 deletions docs/src/pages/docs/installation/async_queries_celery.mdx
@@ -57,13 +57,13 @@ CELERY_CONFIG = CeleryConfig
To start a Celery worker to leverage the configuration, run the following command:

```
celery worker --app=superset.tasks.celery_app:app --pool=prefork -O fair -c 4
celery --app=superset.tasks.celery_app:app worker --pool=prefork -O fair -c 4
```

To start a job which schedules periodic background jobs, run the following command:

```
celery beat --app=superset.tasks.celery_app:app
celery --app=superset.tasks.celery_app:app beat
```

To set up a result backend, you need to pass an instance of a derivative of
164 changes: 164 additions & 0 deletions docs/src/pages/docs/installation/sql_templating.mdx
@@ -87,3 +87,167 @@ FEATURE_FLAGS = {

The available validators and names can be found in
[sql_validators](https://github.com/apache/superset/tree/master/superset/sql_validators).
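
As an illustration, a hypothetical sketch of pointing Superset at a validator in `superset_config.py` (the engine name and validator class below are assumptions based on the bundled Presto validator; confirm the class name in the `sql_validators` module):

```
# Assumed mapping of database engine to validator class name
SQL_VALIDATORS_BY_ENGINE = {
    "presto": "PrestoDBSQLValidator",
}
```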

### Available Macros

In this section, we'll walk through the pre-defined Jinja macros in Superset.

**Current Username**

The `{{ current_username() }}` macro returns the username of the currently logged in user.

If you have caching enabled in your Superset configuration, then by default the `username` value will be used
by Superset when calculating the cache key. A cache key is a unique identifier that determines whether there's a
cache hit in the future so that Superset can retrieve cached data.

You can disable the inclusion of the `username` value in the calculation of the
cache key by adding the following parameter to your Jinja code:

```
{{ current_username(add_to_cache_keys=False) }}
```

**Current User ID**

The `{{ current_user_id() }}` macro returns the user_id of the currently logged in user.

If you have caching enabled in your Superset configuration, then by default the `user_id` value will be used
by Superset when calculating the cache key. A cache key is a unique identifier that determines whether there's a
cache hit in the future so that Superset can retrieve cached data.

You can disable the inclusion of the `user_id` value in the calculation of the
cache key by adding the following parameter to your Jinja code:

```
{{ current_user_id(add_to_cache_keys=False) }}
```

**Custom URL Parameters**

The `{{ url_param('custom_variable') }}` macro lets you define arbitrary URL
parameters and reference them in your SQL code.

Here's a concrete example:

- You write the following query in SQL Lab:

```
SELECT count(*)
FROM ORDERS
WHERE country_code = '{{ url_param('countrycode') }}'
```

- You're hosting Superset at the domain www.example.com and you send your
coworker in Spain the following SQL Lab URL `www.example.com/superset/sqllab?countrycode=ES`
and your coworker in the USA the following SQL Lab URL `www.example.com/superset/sqllab?countrycode=US`
- For your coworker in Spain, the SQL Lab query will be rendered as:

```
SELECT count(*)
FROM ORDERS
WHERE country_code = 'ES'
```

- For your coworker in the USA, the SQL Lab query will be rendered as:

```
SELECT count(*)
FROM ORDERS
WHERE country_code = 'US'
```
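
If the URL parameter may be absent, `url_param` also accepts a default value as its second argument; a small sketch (the `'US'` fallback here is an assumption for illustration):

```
SELECT count(*)
FROM ORDERS
WHERE country_code = '{{ url_param('countrycode', 'US') }}'
```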

**Explicitly Including Values in Cache Key**

The `{{ cache_key_wrapper() }}` function explicitly instructs Superset to add a value to the
accumulated list of values used in the calculation of the cache key.

This function is only needed when you want to wrap your own custom function return values
in the cache key. You can gain more context
[here](https://github.com/apache/superset/blob/efd70077014cbed62e493372d33a2af5237eaadf/superset/jinja_context.py#L133-L148).

Note that this function powers the caching of the `user_id` and `username` values
in the `current_user_id()` and `current_username()` function calls (if you have caching enabled).
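
For illustration, a hypothetical sketch that wraps the return value of a custom function (the `my_team()` helper below is assumed to be exposed through your own Jinja context additions) so that it is included in the cache key:

```
-- my_team() is a hypothetical custom helper, not a built-in macro
SELECT count(*)
FROM ORDERS
WHERE sales_team = '{{ cache_key_wrapper(my_team()) }}'
```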

**Filter Values**

You can retrieve the value for a specific filter as a list using `{{ filter_values() }}`.

This is useful if:
- you want to use a filter component to filter a query where the name of the filter component's column doesn't match the one in the select statement
- you want to have the ability to filter inside the main query for speed purposes

Here's a concrete example:

```
SELECT action, count(*) as times
FROM logs
WHERE
action in ({{ "'" + "','".join(filter_values('action_type')) + "'" }})
GROUP BY action
```

You can use this feature to reference the start and end datetimes from a time filter using the macros below (a short sketch follows the list):

- `{{ from_dttm }}`: start datetime value
- `{{ to_dttm }}`: end datetime value
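
For example, a minimal sketch (assuming a `logs` table with a `dttm` timestamp column) that applies the time filter's range inside the main query:

```
-- dttm is an assumed timestamp column on the logs table
SELECT action, count(*) AS times
FROM logs
WHERE
  dttm >= '{{ from_dttm }}'
  AND dttm < '{{ to_dttm }}'
GROUP BY action
```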

**Filters for a Specific Column**

The `{{ get_filters() }}` macro returns the filters applied to a given column. In addition to
returning the values (similar to how `filter_values()` does), the `get_filters()` macro
returns the operator specified in the Explore UI.

This is useful if:
- you want to handle more than the IN operator in your SQL clause
- you want to handle generating custom SQL conditions for a filter
- you want to have the ability to filter inside the main query for speed purposes

Here's a concrete example:

```
WITH RECURSIVE
superiors(employee_id, manager_id, full_name, level, lineage) AS (
SELECT
employee_id,
manager_id,
full_name,
1 as level,
employee_id as lineage
FROM
employees
WHERE
1=1
{# Render a blank line #}
{%- for filter in get_filters('full_name', remove_filter=True) -%}
{%- if filter.get('op') == 'IN' -%}
AND
full_name IN ( {{ "'" + "', '".join(filter.get('val')) + "'" }} )
{%- endif -%}
{%- if filter.get('op') == 'LIKE' -%}
AND
full_name LIKE {{ "'" + filter.get('val') + "'" }}
{%- endif -%}
{%- endfor -%}
UNION ALL
SELECT
e.employee_id,
e.manager_id,
e.full_name,
s.level + 1 as level,
s.lineage
FROM
employees e,
superiors s
WHERE s.manager_id = e.employee_id
)
SELECT
employee_id, manager_id, full_name, level, lineage
FROM
superiors
order by lineage, level
```
2 changes: 1 addition & 1 deletion helm/superset/Chart.yaml
@@ -22,7 +22,7 @@ maintainers:
- name: craig-rueda
email: craig@craigrueda.com
url: https://github.com/craig-rueda
version: 0.1.3
version: 0.1.4
dependencies:
- name: postgresql
version: 10.2.0
2 changes: 1 addition & 1 deletion helm/superset/values.yaml
@@ -203,7 +203,7 @@ supersetCeleryBeat:
command:
- "/bin/sh"
- "-c"
- ". {{ .Values.configMountPath }}/superset_bootstrap.sh; celery beat --app=superset.tasks.celery_app:app --pidfile /tmp/celerybeat.pid --schedule /tmp/celerybeat-schedule"
- ". {{ .Values.configMountPath }}/superset_bootstrap.sh; celery --app=superset.tasks.celery_app:app beat --pidfile /tmp/celerybeat.pid --schedule /tmp/celerybeat-schedule"
forceReload: false # If true, forces deployment to reload on each upgrade
initContainers:
- name: wait-for-postgres