
feat(sip-68): Add DatasourceDAO class to manage querying different datasources easier #20030

Merged
merged 4 commits into from May 13, 2022
Changes from 2 commits
146 changes: 146 additions & 0 deletions superset/dao/datasource/dao.py
@@ -0,0 +1,146 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.

from typing import Any, Dict, List, Optional, Set, Union

from flask_babel import _
from sqlalchemy import or_
from sqlalchemy.orm import Session, subqueryload
from sqlalchemy.orm.exc import NoResultFound

from superset.connectors.sqla.models import SqlaTable
from superset.dao.base import BaseDAO
from superset.datasets.commands.exceptions import DatasetNotFoundError
from superset.datasets.models import Dataset
from superset.models.core import Database
from superset.models.sql_lab import Query, SavedQuery
from superset.tables.models import Table
from superset.utils.core import DatasourceType

Datasource = Union[Dataset, SqlaTable, Table, Query, SavedQuery, Any]
Review comment (Member):
Suggested change
Datasource = Union[Dataset, SqlaTable, Table, Query, SavedQuery, Any]
Datasource = Union[Dataset, SqlaTable, Table, Query, SavedQuery]

We don't want Datasource to be of type Any — on the contrary, it's really nice that the type clearly defines what constitutes a datasource.

Reply (Member Author):
@betodealmeida this is a hack to allow the types to play nice with mypy, the good thing is it still does some type checking. I was going to look into this more, but literally spent most of the day looking into it and had no luck. I can revisit this if you think it's a mandatory thing

Review comment (@betodealmeida, Member, May 12, 2022):
Yeah, if you need to add Any to make mypy pass it's because something is wrong. Adding Any never fixes anything, it only makes the type checker happy. In this case it defeats the purpose of type checking, because all objects are of type Any.

Was this fixed by using Type?
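A minimal sketch of the `Type[...]` approach the reviewer asks about — whether this matches the PR's eventual fix is not shown here, and the classes and enum values below are toy stand-ins, not Superset's:

```python
from enum import Enum
from typing import Dict, Type, Union


# Toy stand-ins for the real models; the actual classes live in
# superset.connectors.sqla.models, superset.models.sql_lab, etc.
class SqlaTable:
    pass


class Query:
    pass


class DatasourceType(str, Enum):
    SQLATABLE = "sqlatable"  # illustrative values
    QUERY = "query"


# Without Any: the union names exactly what counts as a datasource, and the
# registry maps each enum member to a *class object* via Type[...].
Datasource = Union[SqlaTable, Query]

sources: Dict[DatasourceType, Type[Datasource]] = {
    DatasourceType.SQLATABLE: SqlaTable,
    DatasourceType.QUERY: Query,
}

# Calling the looked-up class gives a value mypy infers as the union type.
instance = sources[DatasourceType.QUERY]()
```

The key distinction is that the dict values are classes, not instances, so the annotation on the mapping must be `Type[Datasource]` rather than `Datasource` — which is likely what tripped mypy in the first place.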

Review comment (Member):
do you think we should keep the Datasource type and DatasourceType enum in the same file so that if one changes, we can be sure to change the other?

Reply (Member Author):
Yeah, I'll move the Datasource reference next to the enum



class DatasourceDAO(BaseDAO):

    sources: Dict[DatasourceType, Datasource] = {
        DatasourceType.SQLATABLE: SqlaTable,
        DatasourceType.QUERY: Query,
        DatasourceType.SAVEDQUERY: SavedQuery,
        DatasourceType.DATASET: Dataset,
        DatasourceType.TABLE: Table,
    }

    @classmethod
    def get_datasource(
        cls, datasource_type: DatasourceType, datasource_id: int, session: Session
Review comment (Member):
Small nit, but it's nice to standardize: in your other methods the session is the first argument, in this one it's the last. I'd move it to first argument here, for consistency.

    ) -> Datasource:
        if datasource_type not in cls.sources:
            raise DatasetNotFoundError()
Review comment (Member):
We should create a custom exception here (DatasourceTypeNotSupportedError, for example).
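A brief sketch of the reviewer's suggestion; the exception names below are illustrative, not taken from the Superset codebase:

```python
from typing import Any, Dict


class DatasourceTypeNotSupportedError(Exception):
    """Raised when no model class is registered for a datasource type."""


class DatasourceNotFoundError(Exception):
    """Raised when no row matches the requested datasource id."""


def lookup_source_class(sources: Dict[str, Any], datasource_type: str) -> Any:
    # Fail with a type-specific error instead of reusing the dataset-specific
    # DatasetNotFoundError, which is misleading for a Table or Query lookup.
    if datasource_type not in sources:
        raise DatasourceTypeNotSupportedError(
            f"Datasource type not supported: {datasource_type}"
        )
    return sources[datasource_type]
```

The point is simply that the error type should describe the failure (unsupported type vs. missing row), so callers can handle the two cases differently.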


        datasource = (
            session.query(cls.sources[datasource_type])
            .filter_by(id=datasource_id)
            .one_or_none()
        )

        if not datasource:
            raise DatasetNotFoundError()
Review comment (Member):
same thing here re what Beto said above. This could be a missing Table or Query.


        return datasource

    @classmethod
    def get_all_datasources(cls, session: Session) -> List[Datasource]:
        datasources: List[Datasource] = []
        for source_class in DatasourceDAO.sources.values():
            qry = session.query(source_class)
            if isinstance(source_class, SqlaTable):
                qry = source_class.default_query(qry)
            datasources.extend(qry.all())
        return datasources
Review comment (Member):
I think we can just limit this to just SqlaTable since no charts or dashboards will be able to be saved with any other datasources
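The lookup pattern used by `get_datasource` above (query the registered class, `filter_by` id, `one_or_none`) can be exercised in isolation. This sketch uses a throwaway model and an in-memory SQLite database, not Superset's models:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()


class Toy(Base):
    """Stand-in for a registered datasource model such as SqlaTable."""

    __tablename__ = "toy"
    id = Column(Integer, primary_key=True)
    table_name = Column(String)


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

session.add(Toy(table_name="my_sqla_table"))
session.flush()

# one_or_none() returns the single matching row or None (never raising for
# zero rows), so the caller decides which "not found" error to raise.
found = session.query(Toy).filter_by(id=1).one_or_none()
missing = session.query(Toy).filter_by(id=999).one_or_none()
```

This is why the DAO can centralize the error handling: the query itself is identical for every registered model class, only the class object changes.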


    @classmethod
    def get_datasource_by_name(  # pylint: disable=too-many-arguments
        cls,
        session: Session,
        datasource_type: DatasourceType,
        datasource_name: str,
        schema: str,
        database_name: str,
Review comment (Member):
A similar small nit here: in general schema comes after database in our functions/methods (since they are hierarchical), I would swap the order for consistency.

    ) -> Optional[Datasource]:
        datasource_class = DatasourceDAO.sources[datasource_type]
        if isinstance(datasource_class, SqlaTable):
            return datasource_class.get_datasource_by_name(
                session, datasource_name, schema, database_name
            )
        return None

    @classmethod
    def query_datasources_by_permissions(  # pylint: disable=invalid-name
        cls,
        session: Session,
        database: Database,
        permissions: Set[str],
        schema_perms: Set[str],
    ) -> List[Datasource]:
        # TODO(bogdan): add unit test
Review comment (@betodealmeida, Member, May 11, 2022):
Don't leave TODOs for other people! :)

Suggested change
# TODO(bogdan): add unit test
# TODO(hughhhh): add unit test

Or remove it altogether.

        datasource_class = DatasourceDAO.sources[DatasourceType[database.type]]
        if isinstance(datasource_class, SqlaTable):
Review comment (Member):
Nit, if you invert the if you can remove an indentation level:

if not isinstance(datasource_class, SqlaTable):
    return []

return (
    session.query(...)
    ...
)

            return (
                session.query(datasource_class)
                .filter_by(database_id=database.id)
                .filter(
                    or_(
                        datasource_class.perm.in_(permissions),
                        datasource_class.schema_perm.in_(schema_perms),
                    )
                )
                .all()
            )
        return []

    @classmethod
    def get_eager_datasource(
        cls, session: Session, datasource_type: str, datasource_id: int
    ) -> Optional[Datasource]:
        """Returns datasource with columns and metrics."""
        datasource_class = DatasourceDAO.sources[DatasourceType[datasource_type]]
        if isinstance(datasource_class, SqlaTable):
Review comment (Member):
Same here, if you invert the if the code is a bit easier to read:

if not isinstance(datasource_class, SqlaTable):
    return None

return (...)

            return (
                session.query(datasource_class)
                .options(
                    subqueryload(datasource_class.columns),
                    subqueryload(datasource_class.metrics),
                )
                .filter_by(id=datasource_id)
                .one()
            )
        return None

    @classmethod
    def query_datasources_by_name(
        cls,
        session: Session,
        database: Database,
        datasource_name: str,
        schema: Optional[str] = None,
    ) -> List[Datasource]:
        datasource_class = DatasourceDAO.sources[DatasourceType[database.type]]
        if isinstance(datasource_class, SqlaTable):
Review comment (Member):
Ditto here.

Review comment (@eschutho, Member, May 12, 2022):
why only sqlatable on these? Is the plan to add in the rest when we need them?

            return datasource_class.query_datasources_by_name(
                session, database, datasource_name, schema=schema
            )
        return []
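One detail the review does not flag: the guards above test `isinstance(datasource_class, SqlaTable)`, but the values stored in `sources` are classes, not instances, so an instance check against a class object is always `False` and these branches fall through to `None`/`[]`. A quick demonstration of the distinction, using toy classes rather than Superset's:

```python
class Model:
    pass


class SqlaTableLike(Model):
    pass


# A class object, as stored in the sources registry.
cls_ref = SqlaTableLike

# isinstance() asks "is this object an instance of the class?" -- a class
# object is an instance of its metaclass (type), not of SqlaTableLike.
instance_check = isinstance(cls_ref, SqlaTableLike)   # False

# issubclass() (or a simple identity test) is the check that matches
# a class object.
subclass_check = issubclass(cls_ref, SqlaTableLike)   # True
identity_check = cls_ref is SqlaTableLike             # True
```

If the intent is "dispatch only when the registered class is SqlaTable", `issubclass` or `is` expresses it correctly; as written, the guard never fires.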
181 changes: 181 additions & 0 deletions tests/unit_tests/dao/datasource_test.py
@@ -0,0 +1,181 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.

import pytest
from sqlalchemy.orm.session import Session

from superset.utils.core import DatasourceType


def create_test_data(session: Session) -> None:
Review comment (Member):
You can also rewrite this as a fixture:

@pytest.fixture
def session_with_data(session: Session) -> Iterator[Session]:
    ...  # code of `create_test_data` here
    yield session

And then in your tests:

def test_get_datasource_sqlatable(app_context: None, session_with_data: Session) -> None:
    ...

But there's no need, a function like this also works fine.

    from superset.columns.models import Column
    from superset.connectors.sqla.models import SqlaTable, TableColumn
    from superset.datasets.models import Dataset
    from superset.models.core import Database
    from superset.models.sql_lab import Query, SavedQuery
    from superset.tables.models import Table

    engine = session.get_bind()
    SqlaTable.metadata.create_all(engine)  # pylint: disable=no-member

    db = Database(database_name="my_database", sqlalchemy_uri="sqlite://")

    columns = [
        TableColumn(column_name="a", type="INTEGER"),
    ]

    sqla_table = SqlaTable(
        table_name="my_sqla_table",
        columns=columns,
        metrics=[],
        database=db,
    )

    query_obj = Query(
        client_id="foo",
        database=db,
        tab_name="test_tab",
        sql_editor_id="test_editor_id",
        sql="select * from bar",
        select_sql="select * from bar",
        executed_sql="select * from bar",
        limit=100,
        select_as_cta=False,
        rows=100,
        error_message="none",
        results_key="abc",
    )

    saved_query = SavedQuery(database=db, sql="select * from foo")

    table = Table(
        name="my_table",
        schema="my_schema",
        catalog="my_catalog",
        database=db,
        columns=[],
    )

    dataset = Dataset(
        database=table.database,
        name="positions",
        expression="""
SELECT array_agg(array[longitude,latitude]) AS position
FROM my_catalog.my_schema.my_table
""",
        tables=[table],
        columns=[
            Column(
                name="position",
                expression="array_agg(array[longitude,latitude])",
            ),
        ],
    )

    session.add(dataset)
    session.add(table)
    session.add(saved_query)
    session.add(query_obj)
    session.add(db)
    session.add(sqla_table)
    session.flush()


def test_get_datasource_sqlatable(app_context: None, session: Session) -> None:
    from superset.connectors.sqla.models import SqlaTable
    from superset.dao.datasource.dao import DatasourceDAO

    create_test_data(session)

    result = DatasourceDAO.get_datasource(
        datasource_type=DatasourceType.SQLATABLE, datasource_id=1, session=session
    )

    assert result.id == 1
    assert result.table_name == "my_sqla_table"
    assert isinstance(result, SqlaTable)


def test_get_datasource_query(app_context: None, session: Session) -> None:
    from superset.dao.datasource.dao import DatasourceDAO
    from superset.models.sql_lab import Query

    create_test_data(session)

    result = DatasourceDAO.get_datasource(
        datasource_type=DatasourceType.QUERY, datasource_id=1, session=session
    )

    assert result.id == 1
    assert isinstance(result, Query)


def test_get_datasource_saved_query(app_context: None, session: Session) -> None:
    from superset.dao.datasource.dao import DatasourceDAO
    from superset.models.sql_lab import SavedQuery

    create_test_data(session)

    result = DatasourceDAO.get_datasource(
        datasource_type=DatasourceType.SAVEDQUERY, datasource_id=1, session=session
    )

    assert result.id == 1
    assert isinstance(result, SavedQuery)


def test_get_datasource_sl_table(app_context: None, session: Session) -> None:
    from superset.dao.datasource.dao import DatasourceDAO
    from superset.tables.models import Table

    create_test_data(session)

    # TODO(hugh): this will break once we remove the dual write;
    # update to datasource_id=1 and it will pass again
    result = DatasourceDAO.get_datasource(
        datasource_type=DatasourceType.TABLE, datasource_id=2, session=session
    )

    assert result.id == 2
    assert isinstance(result, Table)


def test_get_datasource_sl_dataset(app_context: None, session: Session) -> None:
    from superset.dao.datasource.dao import DatasourceDAO
    from superset.datasets.models import Dataset

    create_test_data(session)

    # TODO(hugh): this will break once we remove the dual write;
    # update to datasource_id=1 and it will pass again
    result = DatasourceDAO.get_datasource(
        datasource_type=DatasourceType.DATASET, datasource_id=2, session=session
    )

    assert result.id == 2
    assert isinstance(result, Dataset)


def test_get_all_datasources(app_context: None, session: Session) -> None:
    from superset.dao.datasource.dao import DatasourceDAO

    create_test_data(session)

    # TODO(hugh): this will break once we remove the dual write;
    # update to assert len(result) == 5 and it will pass again
    result = DatasourceDAO.get_all_datasources(session=session)
    assert len(result) == 7