
[Tests] IT tests and test utils update to fix failing tests for serverless #2902

Merged
merged 1 commit into opensearch-project:main on Aug 13, 2024

Conversation

@manasvinibs (Member) commented Aug 6, 2024

Description

Some of the IT tests fail in serverless even though the data exists and the functionality works as expected. The root cause of these failures is sharding differences between data sources, which leads to different ranking and a different order of the schema fields in the response.

  1. Instead of hardcoding the expected schema order in the assertions, this change checks schema names together with their values dynamically during assertion, so the tests still verify correctness even when sharding differences produce a different schema order in the responses (see the sketch after this list).
  2. Data rows are sorted before assertion, so different data sources can return rows in any order when no sort is applied and the tests still verify that the correct data is present.
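
For illustration, a minimal sketch of order-insensitive assertions along these lines, assuming org.json response objects and hypothetical helper names (schemaMatchesIgnoringOrder and rowsMatchIgnoringOrder are illustrative, not the actual test-util methods changed in this PR):

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import org.json.JSONArray;
import org.json.JSONObject;

public class OrderInsensitiveAssertions {

  // Checks that every expected (name, type) pair appears somewhere in the schema,
  // regardless of column position in the response.
  static boolean schemaMatchesIgnoringOrder(
      JSONObject response, List<String[]> expectedNameTypePairs) {
    JSONArray schema = response.getJSONArray("schema");
    for (String[] expected : expectedNameTypePairs) {
      boolean found = false;
      for (int i = 0; i < schema.length(); i++) {
        JSONObject column = schema.getJSONObject(i);
        if (column.getString("name").equals(expected[0])
            && column.getString("type").equals(expected[1])) {
          found = true;
          break;
        }
      }
      if (!found) {
        return false;
      }
    }
    return true;
  }

  // Compares data rows after sorting both actual and expected rows by their JSON
  // string form, so the row order returned by the data source does not matter.
  static boolean rowsMatchIgnoringOrder(JSONObject response, List<List<Object>> expectedRows) {
    JSONArray datarows = response.getJSONArray("datarows");
    List<String> actual = new ArrayList<>();
    for (int i = 0; i < datarows.length(); i++) {
      actual.add(datarows.getJSONArray(i).toString());
    }
    List<String> expected = new ArrayList<>();
    for (List<Object> row : expectedRows) {
      expected.add(new JSONArray(row).toString());
    }
    actual.sort(Comparator.naturalOrder());
    expected.sort(Comparator.naturalOrder());
    return actual.equals(expected);
  }
}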

Related Issues

Resolves #[Issue number to be closed when this PR is merged]

Check List

  • New functionality includes testing.
  • New functionality has been documented.
  • New functionality has javadoc added.
  • New functionality has a user manual doc added.
  • API changes companion pull request created.
  • Commits are signed per the DCO using --signoff.
  • Public documentation issue/PR created.

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.

}

// Verify consecutive deduplication and create the expected rows
private List<Object[]> getExpectedDedupRows(List<Object[]> actualRows) {
@seankao-az (Collaborator) commented Aug 8, 2024

Not quite sure about this. Are the data rows for the dedup test case not being sorted, and is that what causes the IT failure? This looks like actual dedup logic rather than a pre-validated expected value (like the original verifyDataRows(result, rows(true), rows(false), rows(true), rows(false))).

@manasvinibs (Member, Author) replied

No, the failure was not because of unsorted values but because of the size of the data rows in the responses. We were hardcoding the expected response to have four rows, but different data sources can return a different number of rows even after deduplication, so this hardcoded check made the tests fail. Instead of hardcoding the assertions, we can verify deduplication of the rows based on the response data.
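
As a rough sketch of that idea (the helper name mirrors getExpectedDedupRows from the diff above, but the body here is illustrative and assumes deduplication is keyed on the whole row):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class DedupAssertionSketch {

  // Derives the expected result of consecutive deduplication from the returned rows:
  // a row is kept only when it differs from the row immediately before it. The test
  // can then assert against this derived expectation instead of a hardcoded row count.
  static List<Object[]> getExpectedDedupRows(List<Object[]> actualRows) {
    List<Object[]> expected = new ArrayList<>();
    Object[] previous = null;
    for (Object[] row : actualRows) {
      if (previous == null || !Arrays.equals(row, previous)) {
        expected.add(row);
      }
      previous = row;
    }
    return expected;
  }
}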

…rless

Signed-off-by: Manasvini B S <manasvis@amazon.com>

codecov bot commented Aug 9, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 94.50%. Comparing base (4a735ea) to head (8ba5917).

Additional details and impacted files
@@            Coverage Diff            @@
##               main    #2902   +/-   ##
=========================================
  Coverage     94.50%   94.50%           
  Complexity     5235     5235           
=========================================
  Files           515      515           
  Lines         14791    14791           
  Branches        978      978           
=========================================
  Hits          13978    13978           
  Misses          772      772           
  Partials         41       41           
Flag          Coverage Δ
sql-engine    94.50% <ø> (ø)

Flags with carried forward coverage won't be shown.


@ykmr1224 (Collaborator) left a comment

LGTM!

@ykmr1224 added the backport 2.x and maintenance (Improves code quality, but not the product) labels on Aug 13, 2024
@ykmr1224 merged commit 6972487 into opensearch-project:main on Aug 13, 2024
16 of 18 checks passed
@opensearch-trigger-bot (Contributor) commented

The backport to 2.x failed:

The process '/usr/bin/git' failed with exit code 128

To backport manually, run these commands in your terminal:

# Navigate to the root of your repository
cd $(git rev-parse --show-toplevel)
# Fetch latest updates from GitHub
git fetch
# Create a new working tree
git worktree add ../.worktrees/sql/backport-2.x 2.x
# Navigate to the new working tree
pushd ../.worktrees/sql/backport-2.x
# Create a new branch
git switch --create backport/backport-2902-to-2.x
# Cherry-pick the merged commit of this pull request and resolve the conflicts
git cherry-pick -x --mainline 1 6972487e855599745df909292d699d37f82282c7
# Push it to GitHub
git push --set-upstream origin backport/backport-2902-to-2.x
# Go back to the original working tree
popd
# Delete the working tree
git worktree remove ../.worktrees/sql/backport-2.x

Then, create a pull request where the base branch is 2.x and the compare/head branch is backport/backport-2902-to-2.x.

manasvinibs added a commit to manasvinibs/sql that referenced this pull request Aug 14, 2024
…rless (opensearch-project#2902)

Signed-off-by: Manasvini B S <manasvis@amazon.com>
jzonthemtn pushed a commit to jzonthemtn/sql that referenced this pull request Aug 28, 2024
Labels
backport 2.x, backport-failed, maintenance (Improves code quality, but not the product)
3 participants