forked from apache/superset
0.26 #1 (Open)

invenis-paris wants to merge 870 commits into invenis-paris:master from apache:0.26
Conversation
* Fix deep-equality logic. Zeroed in on this while a Deck Scatter Plot chart was prompting for a refresh on load. Using `JSON.stringify` as a proxy for deep equality is wrong. Not sure how to handle the yarn.lock changes here; they are the result of only running `yarn add deep-equal`.
* Addressing comments
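The actual fix swapped in the `deep-equal` npm package; the underlying pitfall, that serialized output is sensitive to key order while logical equality is not, can be sketched in Python:

```python
import json

# Two logically equal objects whose serialized forms differ only in key order.
a = {"x": 1, "y": 2}
b = {"y": 2, "x": 1}

print(a == b)                           # True: real deep equality
print(json.dumps(a) == json.dumps(b))   # False: serialization is order-sensitive
```

Because insertion order is preserved during serialization, two equal objects built in different orders compare unequal under the stringify-based check, which is exactly the kind of spurious "needs refresh" signal described above.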
* Allow limit ordering by post-aggregation metrics
* Don't overwrite original dictionaries
* Update tests
* Python 3 compatibility
* Code review comments; add tests; implement it in groupby as well
* Python 3 compatibility for unittest
* More `self`
* Throw an exception when get aggregations is called with post-aggregations
* Treat adhoc metrics as another aggregation
Superset appends DRUID_TZ info to intervals but not to granularity, which causes one day's data to be returned as two days. This fix also passes DRUID_TZ to the granularity.
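The day-splitting symptom is easy to reproduce with the standard library. The sketch below assumes a hypothetical cluster timezone of UTC+8; midnight of a local day falls in the previous UTC day, so a daily grain computed without the timezone straddles two UTC days:

```python
from datetime import datetime, timezone, timedelta

# Hypothetical cluster timezone (UTC+8), standing in for DRUID_TZ.
DRUID_TZ = timezone(timedelta(hours=8))

# Midnight of May 1 in the cluster's timezone...
local_midnight = datetime(2018, 5, 1, 0, 0, tzinfo=DRUID_TZ)

# ...is still April 30 in UTC, so a daily granularity computed without the
# timezone splits one local day across two UTC day buckets.
utc_view = local_midnight.astimezone(timezone.utc)
print(utc_view)  # 2018-04-30 16:00:00+00:00
```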
* [bugfix] Convert metrics to numeric in the dataframe. It appears that sometimes the DBAPI driver and pandas's `read_sql` fail to return the proper numeric types for metrics, and they show up as `object` in the dataframe. This results in "No numeric types to aggregate" errors when trying to perform aggregations or pivoting in pandas. This PR looks for metric columns typed as `object` and converts them with pandas' `to_numeric`.
* Fix tests
* Remove all iteritems
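A minimal sketch of the conversion approach described above (column and values are made up for illustration):

```python
import pandas as pd

# Metrics came back from the driver as strings, so the column dtype is object.
df = pd.DataFrame({"metric": ["1.5", "2.0", "3.25"]})
assert df["metric"].dtype == object

# Coerce object-typed metric columns to numeric, as the fix does; errors="coerce"
# turns unparseable values into NaN instead of raising.
for col in ["metric"]:
    if df[col].dtype == object:
        df[col] = pd.to_numeric(df[col], errors="coerce")

print(df["metric"].sum())  # 6.75
```

After the conversion, aggregation and pivoting work because the column has a concrete numeric dtype.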
* Disable the run-query button if the SQL query editor is empty
* Remove unnecessary white space
* Fix failing test for SQL props
* Add the sql variable to propTypes and defaultProps
* [BUGFIX] JavaScript's max safe integer is 2^53 - 1; longs are bigger and frequently used as IDs. This is a hacky fix.
* Keep tuple as tuple
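The precision loss can be demonstrated without a browser: JSON numbers are parsed as IEEE-754 doubles on the JavaScript side, and a double's 53-bit significand cannot represent every 64-bit integer. Serializing oversized IDs as strings (the usual workaround, sketched here with a hypothetical payload) sidesteps this:

```python
# JavaScript's Number.MAX_SAFE_INTEGER.
MAX_SAFE_INTEGER = 2**53 - 1

big_id = 2**53 + 1
# Simulate what a double-based JSON parser sees:
assert float(big_id) == float(2**53)                # the trailing +1 is lost
assert float(MAX_SAFE_INTEGER) == MAX_SAFE_INTEGER  # still exactly representable

# Common workaround: serialize oversized IDs as strings so no precision is lost.
safe_payload = {"id": str(big_id)}
print(safe_payload)
```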
* [sql lab] Preserve schema through the visualize flow. #4696 got tangled into refactoring views out of views/core.py and into views/sql_lab.py; this is the same PR without the refactoring.
* Fix lint
The default is one hour (3600 seconds); this entry also makes the setting a bit more discoverable: http://flask-wtf.readthedocs.io/en/stable/config.html?highlight=csrf
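Per the linked Flask-WTF configuration docs, the setting in question is `WTF_CSRF_TIME_LIMIT`. A sketch of making it explicit in a Superset config file (values shown are the Flask-WTF defaults):

```python
# superset_config.py (illustrative): make the CSRF token lifetime explicit.
# Flask-WTF defaults to 3600 seconds (one hour); None disables expiry entirely.
WTF_CSRF_ENABLED = True
WTF_CSRF_TIME_LIMIT = 3600  # seconds
```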
* [deck_multi] Fix issues with deck_multi
* Remove eslint comment
* Added context to templates to be able to use filters in SQL
* Respect the slice timeout in favor of others
* Added trailing comma
Make the defaults less crowded on the axes by not showing the min and max values (bounds).
* Make annotation work with brush
* Add dispatch for events
* Fix lint
* Use xScale.clamp
* Add ISO durations to time grains
* Use ISO duration
* Remove debugging code
* Add module to yarn.lock
* Remove autolint
* Druid granularity as ISO
* Remove dangling comma
… key is a string (#4768)
* Fix how the annotation layer interprets a timestamp string without timezone info; use it as UTC
* [Bugfix] Fixed/refactored the annotation-layer code so that non-timeseries annotations are applied based on the updated chart object after adding all data
* Fixed indentation
* Fix the key string value in case series.key is a string
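The actual fix lives in the JavaScript annotation layer; the interpret-naive-as-UTC rule it applies can be sketched in Python:

```python
from datetime import datetime, timezone

# A timestamp string with no timezone suffix, as an annotation source might provide.
raw = "2018-05-01T12:00:00"

ts = datetime.fromisoformat(raw)
if ts.tzinfo is None:
    # Interpret naive timestamps as UTC rather than as local time, so the
    # annotation lands at the same instant regardless of the viewer's locale.
    ts = ts.replace(tzinfo=timezone.utc)

print(ts.isoformat())  # 2018-05-01T12:00:00+00:00
```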
* [dashboard] Open in edit mode when adding a chart
* Move dashboard unit tests into their own file
* Fix tests
* Better URL management
* Provide a much-needed module header for the controls.jsx module
* Typos
* [line] Fix verbose names in time shift
* Addressing comments
* Improved granularity parsing
* Add unit tests
* Explicit cast to int
* ScreenGrid play slider
* Clean code
* Refactor common code
Add Ascendica Development to the list of organizations using Superset.
* Improve xAxis ticks, thinner bottom margin
* Move utils folder
* Add isTruthy
Fix issues where y_axis_format is set but num_period_compare is not.
#5253)
* Revert "[sqllab] Fix sql lab resolution link (#5216)". This reverts commit 93cdf60.
* Revert "Pin botocore version (#5184)". This reverts commit 70679d4.
* Revert "Describe the use of custom OAuth2 authorization servers (#5220)". This reverts commit a84f430.
* Revert "[bubble-chart] Fixing issue w/ metric names (#5237)". This reverts commit 5c106b9.
* Revert "[adhoc-filters] Adding adhoc-filters to all viz types (#5206)". This reverts commit d483ed1.
* Revert "[perf] add webpack 4 + SplitChunks + lazy load visualizations (#5240)". This reverts commit 1fc4ee0.
* Allow owners to view their own dashboards
* Update docstring
* Update sm variable
* Add unit test
* Misc linter
When receiving a VARBINARY field from Presto, it shows up as type `bytes` from the pyhive driver. The pre-3.15 version of simplejson then attempts to convert it to UTF-8 by default and fails. I bumped to simplejson>=3.25.0 and set `encoding=None` as documented at https://simplejson.readthedocs.io/en/latest/#basic-usage so that we can handle bytes on our own.
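The "handle bytes on our own" idea can be sketched with the standard-library `json` module (which, like simplejson with `encoding=None`, refuses to guess an encoding for raw bytes); the row and the hex representation here are made-up illustrations:

```python
import json

# A row containing a raw VARBINARY value, as pyhive might return it.
row = {"id": 1, "blob": b"\xde\xad\xbe\xef"}

def bytes_default(o):
    # Called only for objects the encoder cannot serialize; convert bytes
    # ourselves instead of assuming they are valid UTF-8 text.
    if isinstance(o, bytes):
        return o.hex()
    raise TypeError(f"Unserializable type: {type(o)}")

print(json.dumps(row, default=bytes_default))  # {"id": 1, "blob": "deadbeef"}
```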
* [bugfix] Add support for numeric nodes in Sankey; closes #5142
* Lint
(cherry picked from commit bd24f85)
(cherry picked from commit b0eee12)
(cherry picked from commit fb988fe)
* Improve database type inference. Python's DBAPI isn't very clear or homogeneous about the cursor.description specification, and this PR attempts to improve inference of the datatypes returned in the cursor. This work started around Presto's TIMESTAMP type being mishandled as a string because the database driver (pyhive) returns it as a string. The work here fixes that bug and does a better job of inferring MySQL and Presto types. It also creates a new method in db_engine_specs allowing other database engines to implement more precise type inference as needed.
* Fixing tests
* Addressing comments
* Using infer_objects
* Removing faulty line
* Addressing PrestoSpec redundant-method comment
* Fix rebase issue
* Fix tests

(cherry picked from commit 777d876)
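The `infer_objects` step mentioned above is a real pandas method that soft-converts object-typed columns to their best concrete dtype; a minimal sketch with a made-up column:

```python
import pandas as pd

# A column that arrived as generic Python objects even though the values are ints,
# as happens when a driver hands back untyped cursor results.
df = pd.DataFrame({"row_count": [1, 2, 3]}, dtype=object)
assert df["row_count"].dtype == object

# infer_objects() soft-converts object columns to their best concrete dtype.
fixed = df.infer_objects()
print(fixed["row_count"].dtype)  # int64
```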
Seeing UnicodeDecodeError on our build system running Python 3.6, though I couldn't reproduce it on my local 3.6. This fix addresses the issue. (cherry picked from commit 885d779)
* Raise errors with null values
* Linting
* Linting some more
* Use get
* Change ordering
* Linting

(cherry picked from commit 089037f)