
read_sql_query converts empty columns to object with no way to override #26682

Closed
lengau opened this issue Jun 5, 2019 · 4 comments

lengau commented Jun 5, 2019

Code Sample, a copy-pastable example if possible

import pandas
import numpy
import sqlite3
con = sqlite3.connect('example.db')
df = pandas.DataFrame(
    {
        'filled': [1.0, 2.0, 3.0],
        'partial': [1.0, numpy.nan, numpy.nan],
        'empty': [numpy.nan] * 3
    }
)
df.to_sql('test', con, index=False)
returned_df = pandas.read_sql_query('select * from test', con)

for col in ['filled', 'partial', 'empty']:
    assert df[col].dtype == returned_df[col].dtype, f'Problematic column: {col}, Expected dtype: {df[col].dtype}, actual dtype: {returned_df[col].dtype}'

Problem description

When a float column queried from a SQL database contains NULL, the type of that column in the resulting DataFrame is object, filled with None values ('empty' column in the example). NULL values in a column containing at least one non-NULL row ('partial' column in the example) are converted to float64 dtype, as expected.

This itself is not necessarily an issue, but without a way to assign dtypes to a column, some future uses of the column (such as joining to another numeric column) raise an exception unless the column type is checked and (if necessary) changed. In read_csv and read_excel, both the dtype parameter and the converters parameter are capable of the expected behaviour, so implementing one or both of these in read_sql_query (and probably read_sql_table, which I believe has the same issue) would provide a reasonable workaround.
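Until such a keyword exists, the only recourse is coercing the affected columns after the query returns. A minimal sketch of that workaround (the in-memory database and column name are illustrative, not part of any pandas API):

```python
import sqlite3

import numpy
import pandas

con = sqlite3.connect(":memory:")
pandas.DataFrame({"empty": [numpy.nan] * 3}).to_sql("test", con, index=False)

returned_df = pandas.read_sql_query("select * from test", con)

# The all-NULL column comes back as object dtype filled with None...
assert returned_df["empty"].dtype == object

# ...so it has to be coerced to the intended dtype by hand,
# which is exactly what a dtype= keyword would do up front.
returned_df["empty"] = returned_df["empty"].astype("float64")
assert returned_df["empty"].dtype == "float64"
```

This works because `astype("float64")` maps `None` to `NaN`, but it requires the caller to already know which columns are numeric.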

Output of pd.show_versions()

INSTALLED VERSIONS

commit: None
python: 3.7.3.final.0
python-bits: 64
OS: Linux
OS-release: 4.19.34-04457-g5b63d4390e96
machine: x86_64
processor:
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8

pandas: 0.24.2
pytest: None
pip: 19.1.1
setuptools: 41.0.1
Cython: None
numpy: 1.16.3
scipy: None
pyarrow: None
xarray: None
IPython: 7.5.0
sphinx: None
patsy: None
dateutil: 2.8.0
pytz: 2019.1
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml.etree: None
bs4: None
html5lib: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: 2.10.1
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
gcsfs: None

@TomAugspurger (Contributor)

Thanks for the nice reproducible example.

When you do a SQL query, pandas currently doesn't know the dtypes of the query output, so it has to infer them from the values. A column of all None values is inferred as object:

In [12]: pandas.DataFrame({"A": [None, None]}).dtypes
Out[12]:
A    object
dtype: object

There are other open issues about adding a dtype keyword to read_sql_query to allow users to specify this. Alternatively, pandas does know the types when you read the entire table, since the database can tell us:

In [9]: engine = sqlalchemy.create_engine("sqlite:///example.db")

In [10]: engine
Out[10]: Engine(sqlite:///example.db)

In [11]: pandas.read_sql_table("test", engine)
Out[11]:
   filled  partial  empty
0     1.0      1.0    NaN
1     2.0      NaN    NaN
2     3.0      NaN    NaN

@TomAugspurger TomAugspurger added this to the No action milestone Jun 6, 2019

lengau commented Jun 6, 2019

Can you expand on what other issues there are with adding a dtype keyword? (And what about adding converters?) From what I can tell read_sql_query uses DataFrame.from_records, which doesn't currently have a dtype keyword, but seems to have a reasonably straightforward way to implement one.
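For reference, per-column coercion layered on top of from_records can emulate what a dtype keyword might look like; a sketch (the records and the dtype mapping are illustrative):

```python
import pandas

records = [(1.0, None), (2.0, None), (3.0, None)]
df = pandas.DataFrame.from_records(records, columns=["filled", "partial_or_empty"])

# from_records has no per-column dtype keyword, so the all-None
# column is inferred as object; coerce it after construction.
df = df.astype({"partial_or_empty": "float64"})

assert df["filled"].dtype == "float64"
assert df["partial_or_empty"].dtype == "float64"
```

A dtype keyword on from_records (or on read_sql_query itself) would effectively fold this astype step into construction.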


TomAugspurger commented Jun 6, 2019 via email


lengau commented Jun 6, 2019

Thanks! I'll definitely give it a go. (Sorry to make a dupe - when I searched the issue tracker I never came across that one.) Worst case, I can write up what issues I actually had trying to implement it so possible future attempts are better informed.
