🐞 Destination databricks: update jdbc driver to patch log4j #7622
Conversation
/test connector=connectors/destination-databricks
/test connector=connectors/destination-databricks
/test connector=connectors/destination-databricks
/test connector=connectors/destination-databricks
@davinchia, I manually created an …
@tuliren yes we should. You can either do it on your own, or create a ticket for the infrastructure team to help you with it. Have you seen https://docs.google.com/document/d/1Xqr9fC1toPlMoj74jb9e5FaTMjmsBIFd9BY3c3Zv5qc/edit#bookmark=id.ik4m6mwqpw5v on how to work with the infrastructure team?
Question from me:
I granted this service account read permission to this bucket on the permissions page in GCS.
CloudRepo is the Java hosting service we are using: cloudrepo.io. Login credentials are in Lastpass under …

Got it. I didn't realise we wanted to keep this private. If you aren't rushing, I would prefer creating a private repository in CloudRepo and setting things up so we publish and pull from there. The main thing here is to keep all the Java artifacts in one place. I can take a stab at this on Monday/Tuesday if you aren't free. What do you think?
Got it. Thanks. I think I will just change it to download from Databricks each time. That does not seem to be breaking the terms & conditions. It's not worth the time to be too careful. But thank you all the same.
Okay! Happy to take another look when you make those changes. Thanks Liren.
Force-pushed from 736c0d6 to 76315db
/test connector=connectors/destination-databricks
/test connector=connectors/destination-databricks
@@ -77,7 +77,7 @@ public StandardCheckConnectionOutput run(final StandardCheckConnectionInput inpu
   LOGGER.debug("Check connection job received output: {}", output);
   return output;
 } else {
-  throw new WorkerException("Error while getting checking connection.");
+  throw new WorkerException(String.format("Error checking connection, status: %s, exit code: %d", status, exitCode));
@jrhizor, a check operation failed in the acceptance test. I added more information to the exception message and saw this:
Error checking connection, status: Optional[io.airbyte.protocol.models.AirbyteConnectionStatus@605d010e[status=SUCCEEDED,message=<null>,additionalProperties=***]], exit code: 143
Do you know what might be the root cause behind this exit code? Could it be related to the recent change in WorkerUtils#gentleClose?
I don't think the gentle close changes look related. Are you sure this docker container isn't actually outputting an error code?
Here is what happens:
- The database connection requires an explicit `close` call; otherwise the connector will not shut down on its own.
- The worker waits 1 minute for the connector to shut down, and then closes it by force (code here) by sending a `SIGTERM`.
- The `SIGTERM` results in a 143 exit code (128 + signal 15).

So the root cause is that, for Databricks, we must always close the database connection at the end of the check, which the default check command implementation in `CopyDestination` did not do. The issue has been resolved.
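For context, here is a minimal sketch of the idea behind the fix, assuming a plain JDBC check. The class and method names are hypothetical and this is not the actual Airbyte `CopyDestination` code; the point is only that using try-with-resources guarantees the connection is closed even when the check fails, so the connector process can exit on its own instead of being killed with `SIGTERM` (exit code 143).

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Hypothetical example, not the Airbyte implementation.
public class DatabricksCheckExample {

  public static boolean checkConnection(final String jdbcUrl, final String user, final String password) {
    // try-with-resources calls Connection#close() on every code path,
    // which is what lets the JVM shut down without being force-killed.
    try (Connection connection = DriverManager.getConnection(jdbcUrl, user, password)) {
      return connection.isValid(10); // 10-second validation timeout
    } catch (final SQLException e) {
      return false;
    }
  }
}
```

Per the discussion above, the actual change ensures the Databricks connection is closed at the end of the check rather than relying on the default behavior.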
Force-pushed from 32aa7bb to 6b5d420
/test connector=connectors/destination-databricks
What
… the `test` and `publish` commands.